00:00:00.001 Started by upstream project "spdk-dpdk-per-patch" build number 264 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.067 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.068 The recommended git tool is: git 00:00:00.068 using credential 00000000-0000-0000-0000-000000000002 00:00:00.069 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.118 Fetching changes from the remote Git repository 00:00:00.120 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.166 Using shallow fetch with depth 1 00:00:00.166 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.166 > git --version # timeout=10 00:00:00.208 > git --version # 'git version 2.39.2' 00:00:00.208 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.209 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.209 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.054 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.067 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.079 Checking out Revision 9b8cb13ca58b20128762541e7d6e360f21b83f5a (FETCH_HEAD) 00:00:04.079 > git config core.sparsecheckout # timeout=10 00:00:04.088 > git read-tree -mu HEAD # timeout=10 00:00:04.104 > git checkout -f 9b8cb13ca58b20128762541e7d6e360f21b83f5a # timeout=5 00:00:04.121 Commit message: "inventory: repurpose WFP74 and WFP75 to dev systems" 00:00:04.121 > git rev-list --no-walk 9b8cb13ca58b20128762541e7d6e360f21b83f5a # timeout=10 00:00:04.190 [Pipeline] Start of Pipeline 00:00:04.205 [Pipeline] library 00:00:04.207 Loading library shm_lib@master 00:00:04.207 Library shm_lib@master is cached. Copying from home. 00:00:04.220 [Pipeline] node 00:00:04.231 Running on VM-host-SM17 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.232 [Pipeline] { 00:00:04.242 [Pipeline] catchError 00:00:04.243 [Pipeline] { 00:00:04.255 [Pipeline] wrap 00:00:04.263 [Pipeline] { 00:00:04.267 [Pipeline] stage 00:00:04.269 [Pipeline] { (Prologue) 00:00:04.279 [Pipeline] echo 00:00:04.279 Node: VM-host-SM17 00:00:04.283 [Pipeline] cleanWs 00:00:04.289 [WS-CLEANUP] Deleting project workspace... 00:00:04.289 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.293 [WS-CLEANUP] done 00:00:04.436 [Pipeline] setCustomBuildProperty 00:00:04.493 [Pipeline] nodesByLabel 00:00:04.494 Found a total of 1 nodes with the 'sorcerer' label 00:00:04.502 [Pipeline] httpRequest 00:00:04.506 HttpMethod: GET 00:00:04.506 URL: http://10.211.164.101/packages/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:04.507 Sending request to url: http://10.211.164.101/packages/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:04.508 Response Code: HTTP/1.1 200 OK 00:00:04.509 Success: Status code 200 is in the accepted range: 200,404 00:00:04.509 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:05.050 [Pipeline] sh 00:00:05.325 + tar --no-same-owner -xf jbp_9b8cb13ca58b20128762541e7d6e360f21b83f5a.tar.gz 00:00:05.343 [Pipeline] httpRequest 00:00:05.346 HttpMethod: GET 00:00:05.347 URL: http://10.211.164.101/packages/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:05.347 Sending request to url: http://10.211.164.101/packages/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:05.357 Response Code: HTTP/1.1 200 OK 00:00:05.358 Success: Status code 200 is in the accepted range: 200,404 00:00:05.359 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:33.079 [Pipeline] sh 00:00:33.357 + tar --no-same-owner -xf spdk_cf8ec7cfe7cc045dd74b4dc37b0f52cad9732631.tar.gz 00:00:36.652 [Pipeline] sh 00:00:36.931 + git -C spdk log --oneline -n5 00:00:36.931 cf8ec7cfe version: 24.09-pre 00:00:36.931 2d6134546 lib/ftl: Handle trim requests without VSS 00:00:36.931 106ad3793 lib/ftl: Rename unmap to trim 00:00:36.931 5555d51c8 lib/ftl: Add means to create new layout regions 00:00:36.931 5d89ebb72 lib/ftl: Add deinit handler to FTL mngt 00:00:36.947 [Pipeline] sh 00:00:37.228 + git -C spdk/dpdk fetch https://review.spdk.io/gerrit/spdk/dpdk refs/changes/88/22688/9 00:00:38.165 From https://review.spdk.io/gerrit/spdk/dpdk 00:00:38.165 * branch refs/changes/88/22688/9 -> FETCH_HEAD 00:00:38.176 [Pipeline] sh 00:00:38.453 + git -C spdk/dpdk checkout FETCH_HEAD 00:00:38.711 Previous HEAD position was 08f3a46de7 pmdinfogen: avoid empty string in ELFSymbol() 00:00:38.711 HEAD is now at 5aec55a1d6 meson/mlx5: Suppress -Wunused-value diagnostic 00:00:38.731 [Pipeline] writeFile 00:00:38.751 [Pipeline] sh 00:00:39.032 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:39.043 [Pipeline] sh 00:00:39.321 + cat autorun-spdk.conf 00:00:39.321 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:39.321 SPDK_TEST_NVMF=1 00:00:39.321 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.321 SPDK_TEST_URING=1 00:00:39.321 SPDK_TEST_USDT=1 00:00:39.321 SPDK_RUN_UBSAN=1 00:00:39.321 NET_TYPE=virt 00:00:39.321 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:39.327 RUN_NIGHTLY= 00:00:39.329 [Pipeline] } 00:00:39.347 [Pipeline] // stage 00:00:39.361 [Pipeline] stage 00:00:39.364 [Pipeline] { (Run VM) 00:00:39.378 [Pipeline] sh 00:00:39.654 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:39.654 + echo 'Start stage prepare_nvme.sh' 00:00:39.654 Start stage prepare_nvme.sh 00:00:39.654 + [[ -n 5 ]] 00:00:39.654 + disk_prefix=ex5 00:00:39.654 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:39.654 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:39.654 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:39.654 ++ SPDK_RUN_FUNCTIONAL_TEST=1 
00:00:39.654 ++ SPDK_TEST_NVMF=1 00:00:39.654 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:39.654 ++ SPDK_TEST_URING=1 00:00:39.654 ++ SPDK_TEST_USDT=1 00:00:39.654 ++ SPDK_RUN_UBSAN=1 00:00:39.654 ++ NET_TYPE=virt 00:00:39.654 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:39.654 ++ RUN_NIGHTLY= 00:00:39.654 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:39.654 + nvme_files=() 00:00:39.654 + declare -A nvme_files 00:00:39.654 + backend_dir=/var/lib/libvirt/images/backends 00:00:39.654 + nvme_files['nvme.img']=5G 00:00:39.654 + nvme_files['nvme-cmb.img']=5G 00:00:39.654 + nvme_files['nvme-multi0.img']=4G 00:00:39.654 + nvme_files['nvme-multi1.img']=4G 00:00:39.654 + nvme_files['nvme-multi2.img']=4G 00:00:39.654 + nvme_files['nvme-openstack.img']=8G 00:00:39.654 + nvme_files['nvme-zns.img']=5G 00:00:39.654 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:39.654 + (( SPDK_TEST_FTL == 1 )) 00:00:39.654 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:39.654 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:39.654 + for nvme in "${!nvme_files[@]}" 00:00:39.654 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:39.654 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:39.654 + for nvme in "${!nvme_files[@]}" 00:00:39.654 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:39.655 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:39.655 + for nvme in "${!nvme_files[@]}" 00:00:39.655 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:39.655 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:39.655 + for nvme in "${!nvme_files[@]}" 00:00:39.655 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:39.655 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:39.655 + for nvme in "${!nvme_files[@]}" 00:00:39.655 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:39.655 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:39.655 + for nvme in "${!nvme_files[@]}" 00:00:39.655 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:39.655 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:39.655 + for nvme in "${!nvme_files[@]}" 00:00:39.655 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:39.913 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:39.913 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:39.913 + echo 'End stage prepare_nvme.sh' 00:00:39.913 End stage prepare_nvme.sh 00:00:39.926 [Pipeline] sh 00:00:40.206 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:40.206 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:00:40.206 00:00:40.206 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:40.206 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:40.206 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:40.206 HELP=0 00:00:40.206 DRY_RUN=0 00:00:40.206 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:00:40.206 NVME_DISKS_TYPE=nvme,nvme, 00:00:40.206 NVME_AUTO_CREATE=0 00:00:40.206 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:00:40.206 NVME_CMB=,, 00:00:40.206 NVME_PMR=,, 00:00:40.206 NVME_ZNS=,, 00:00:40.206 NVME_MS=,, 00:00:40.206 NVME_FDP=,, 00:00:40.206 SPDK_VAGRANT_DISTRO=fedora38 00:00:40.206 SPDK_VAGRANT_VMCPU=10 00:00:40.206 SPDK_VAGRANT_VMRAM=12288 00:00:40.206 SPDK_VAGRANT_PROVIDER=libvirt 00:00:40.206 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:40.206 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:40.206 SPDK_OPENSTACK_NETWORK=0 00:00:40.206 VAGRANT_PACKAGE_BOX=0 00:00:40.206 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:40.206 FORCE_DISTRO=true 00:00:40.206 VAGRANT_BOX_VERSION= 00:00:40.206 EXTRA_VAGRANTFILES= 00:00:40.206 NIC_MODEL=e1000 00:00:40.206 00:00:40.206 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:40.206 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:43.501 Bringing machine 'default' up with 'libvirt' provider... 00:00:44.068 ==> default: Creating image (snapshot of base box volume). 00:00:44.327 ==> default: Creating domain with the following settings... 
00:00:44.327 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715883717_8b2526579709eaa09381 00:00:44.327 ==> default: -- Domain type: kvm 00:00:44.327 ==> default: -- Cpus: 10 00:00:44.327 ==> default: -- Feature: acpi 00:00:44.327 ==> default: -- Feature: apic 00:00:44.327 ==> default: -- Feature: pae 00:00:44.327 ==> default: -- Memory: 12288M 00:00:44.327 ==> default: -- Memory Backing: hugepages: 00:00:44.327 ==> default: -- Management MAC: 00:00:44.327 ==> default: -- Loader: 00:00:44.327 ==> default: -- Nvram: 00:00:44.327 ==> default: -- Base box: spdk/fedora38 00:00:44.327 ==> default: -- Storage pool: default 00:00:44.327 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715883717_8b2526579709eaa09381.img (20G) 00:00:44.327 ==> default: -- Volume Cache: default 00:00:44.327 ==> default: -- Kernel: 00:00:44.327 ==> default: -- Initrd: 00:00:44.327 ==> default: -- Graphics Type: vnc 00:00:44.327 ==> default: -- Graphics Port: -1 00:00:44.327 ==> default: -- Graphics IP: 127.0.0.1 00:00:44.327 ==> default: -- Graphics Password: Not defined 00:00:44.327 ==> default: -- Video Type: cirrus 00:00:44.327 ==> default: -- Video VRAM: 9216 00:00:44.327 ==> default: -- Sound Type: 00:00:44.327 ==> default: -- Keymap: en-us 00:00:44.327 ==> default: -- TPM Path: 00:00:44.327 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:44.327 ==> default: -- Command line args: 00:00:44.327 ==> default: -> value=-device, 00:00:44.327 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:44.327 ==> default: -> value=-drive, 00:00:44.327 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:44.327 ==> default: -> value=-device, 00:00:44.327 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:44.327 ==> default: -> value=-device, 00:00:44.327 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:44.327 ==> default: -> value=-drive, 00:00:44.327 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:44.327 ==> default: -> value=-device, 00:00:44.327 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:44.327 ==> default: -> value=-drive, 00:00:44.327 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:44.327 ==> default: -> value=-device, 00:00:44.327 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:44.327 ==> default: -> value=-drive, 00:00:44.327 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:44.327 ==> default: -> value=-device, 00:00:44.327 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:44.327 ==> default: Creating shared folders metadata... 00:00:44.327 ==> default: Starting domain. 00:00:46.233 ==> default: Waiting for domain to get an IP address... 00:01:04.316 ==> default: Waiting for SSH to become available... 00:01:04.316 ==> default: Configuring and enabling network interfaces... 
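For reference, the NVMe-related arguments listed in the domain settings above correspond roughly to the following direct qemu-system-x86_64 invocation. This is a minimal sketch, not how the CI actually launches the guest (vagrant-libvirt does that): the binary path is the emulator configured earlier in this run, machine/CPU/memory/boot options are omitted, and only the controller, drive and namespace arguments are taken from the log.

    # Sketch only: two emulated NVMe controllers as wired up for this domain.
    # nvme-0 (serial 12340) exposes one namespace; nvme-1 (serial 12341) exposes three (nsid 1-3).
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -device nvme,id=nvme-1,serial=12341,addr=0x11 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
        -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1 \
        -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2 \
        -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096
    # Inside the guest this yields one controller with a single namespace and a second
    # controller with three namespaces, matching the nvme0n1 and nvme1n1-nvme1n3 block
    # devices that scripts/setup.sh status reports later in this log.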
00:01:07.600 default: SSH address: 192.168.121.209:22 00:01:07.600 default: SSH username: vagrant 00:01:07.600 default: SSH auth method: private key 00:01:10.134 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:18.312 ==> default: Mounting SSHFS shared folder... 00:01:19.248 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:19.248 ==> default: Checking Mount.. 00:01:20.626 ==> default: Folder Successfully Mounted! 00:01:20.626 ==> default: Running provisioner: file... 00:01:21.195 default: ~/.gitconfig => .gitconfig 00:01:21.454 00:01:21.454 SUCCESS! 00:01:21.454 00:01:21.454 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:21.454 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:21.454 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:21.454 00:01:21.464 [Pipeline] } 00:01:21.483 [Pipeline] // stage 00:01:21.490 [Pipeline] dir 00:01:21.491 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:21.492 [Pipeline] { 00:01:21.506 [Pipeline] catchError 00:01:21.508 [Pipeline] { 00:01:21.521 [Pipeline] sh 00:01:21.799 + vagrant ssh-config --host vagrant 00:01:21.799 + sed -ne /^Host/,$p 00:01:21.799 + tee ssh_conf 00:01:26.003 Host vagrant 00:01:26.003 HostName 192.168.121.209 00:01:26.003 User vagrant 00:01:26.003 Port 22 00:01:26.003 UserKnownHostsFile /dev/null 00:01:26.003 StrictHostKeyChecking no 00:01:26.003 PasswordAuthentication no 00:01:26.003 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:01:26.003 IdentitiesOnly yes 00:01:26.003 LogLevel FATAL 00:01:26.003 ForwardAgent yes 00:01:26.003 ForwardX11 yes 00:01:26.003 00:01:26.018 [Pipeline] withEnv 00:01:26.020 [Pipeline] { 00:01:26.037 [Pipeline] sh 00:01:26.316 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:26.316 source /etc/os-release 00:01:26.316 [[ -e /image.version ]] && img=$(< /image.version) 00:01:26.316 # Minimal, systemd-like check. 00:01:26.316 if [[ -e /.dockerenv ]]; then 00:01:26.316 # Clear garbage from the node's name: 00:01:26.316 # agt-er_autotest_547-896 -> autotest_547-896 00:01:26.316 # $HOSTNAME is the actual container id 00:01:26.316 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:26.316 if mountpoint -q /etc/hostname; then 00:01:26.316 # We can assume this is a mount from a host where container is running, 00:01:26.316 # so fetch its hostname to easily identify the target swarm worker. 
00:01:26.316 container="$(< /etc/hostname) ($agent)" 00:01:26.316 else 00:01:26.316 # Fallback 00:01:26.316 container=$agent 00:01:26.316 fi 00:01:26.316 fi 00:01:26.316 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:26.316 00:01:26.587 [Pipeline] } 00:01:26.608 [Pipeline] // withEnv 00:01:26.617 [Pipeline] setCustomBuildProperty 00:01:26.632 [Pipeline] stage 00:01:26.635 [Pipeline] { (Tests) 00:01:26.657 [Pipeline] sh 00:01:26.937 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:27.211 [Pipeline] timeout 00:01:27.211 Timeout set to expire in 40 min 00:01:27.213 [Pipeline] { 00:01:27.231 [Pipeline] sh 00:01:27.510 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:28.077 HEAD is now at cf8ec7cfe version: 24.09-pre 00:01:28.091 [Pipeline] sh 00:01:28.370 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:28.642 [Pipeline] sh 00:01:28.921 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:29.188 [Pipeline] sh 00:01:29.462 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:29.462 ++ readlink -f spdk_repo 00:01:29.720 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:29.720 + [[ -n /home/vagrant/spdk_repo ]] 00:01:29.720 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:29.720 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:29.720 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:29.720 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:29.720 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:29.720 + cd /home/vagrant/spdk_repo 00:01:29.720 + source /etc/os-release 00:01:29.720 ++ NAME='Fedora Linux' 00:01:29.720 ++ VERSION='38 (Cloud Edition)' 00:01:29.720 ++ ID=fedora 00:01:29.720 ++ VERSION_ID=38 00:01:29.720 ++ VERSION_CODENAME= 00:01:29.720 ++ PLATFORM_ID=platform:f38 00:01:29.720 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:29.720 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:29.720 ++ LOGO=fedora-logo-icon 00:01:29.720 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:29.720 ++ HOME_URL=https://fedoraproject.org/ 00:01:29.720 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:29.720 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:29.720 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:29.720 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:29.720 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:29.720 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:29.720 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:29.720 ++ SUPPORT_END=2024-05-14 00:01:29.720 ++ VARIANT='Cloud Edition' 00:01:29.720 ++ VARIANT_ID=cloud 00:01:29.720 + uname -a 00:01:29.720 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:29.720 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:29.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:30.237 Hugepages 00:01:30.237 node hugesize free / total 00:01:30.237 node0 1048576kB 0 / 0 00:01:30.237 node0 2048kB 0 / 0 00:01:30.237 00:01:30.237 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:30.237 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:30.237 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:30.237 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:30.237 + rm -f /tmp/spdk-ld-path 00:01:30.237 + source autorun-spdk.conf 00:01:30.237 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.237 ++ SPDK_TEST_NVMF=1 00:01:30.237 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.237 ++ SPDK_TEST_URING=1 00:01:30.237 ++ SPDK_TEST_USDT=1 00:01:30.237 ++ SPDK_RUN_UBSAN=1 00:01:30.237 ++ NET_TYPE=virt 00:01:30.237 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.237 ++ RUN_NIGHTLY= 00:01:30.237 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:30.237 + [[ -n '' ]] 00:01:30.237 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:30.237 + for M in /var/spdk/build-*-manifest.txt 00:01:30.237 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:30.237 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:30.237 + for M in /var/spdk/build-*-manifest.txt 00:01:30.237 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:30.237 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:30.237 ++ uname 00:01:30.237 + [[ Linux == \L\i\n\u\x ]] 00:01:30.237 + sudo dmesg -T 00:01:30.237 + sudo dmesg --clear 00:01:30.237 + dmesg_pid=5110 00:01:30.237 + [[ Fedora Linux == FreeBSD ]] 00:01:30.237 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.237 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.237 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:30.237 + [[ -x /usr/src/fio-static/fio ]] 00:01:30.237 + sudo dmesg -Tw 00:01:30.237 + export FIO_BIN=/usr/src/fio-static/fio 00:01:30.237 + FIO_BIN=/usr/src/fio-static/fio 00:01:30.237 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:30.237 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:30.237 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:30.237 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.238 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.238 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:30.238 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.238 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.238 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:30.238 Test configuration: 00:01:30.238 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.238 SPDK_TEST_NVMF=1 00:01:30.238 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:30.238 SPDK_TEST_URING=1 00:01:30.238 SPDK_TEST_USDT=1 00:01:30.238 SPDK_RUN_UBSAN=1 00:01:30.238 NET_TYPE=virt 00:01:30.238 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.496 RUN_NIGHTLY= 18:22:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:30.496 18:22:43 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:30.496 18:22:43 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:30.496 18:22:43 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:30.497 18:22:43 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.497 18:22:43 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.497 18:22:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.497 18:22:43 -- paths/export.sh@5 -- $ export PATH 00:01:30.497 18:22:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:30.497 18:22:43 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:30.497 18:22:43 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:30.497 18:22:43 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715883763.XXXXXX 00:01:30.497 18:22:43 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715883763.nUSomq 00:01:30.497 18:22:43 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:30.497 18:22:43 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:30.497 18:22:43 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:30.497 18:22:43 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:30.497 18:22:43 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:30.497 18:22:43 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:30.497 18:22:43 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:30.497 18:22:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.497 18:22:43 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:30.497 18:22:43 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:30.497 18:22:43 -- pm/common@17 -- $ local monitor 00:01:30.497 18:22:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:30.497 18:22:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:30.497 18:22:43 -- pm/common@25 -- $ sleep 1 00:01:30.497 18:22:43 -- pm/common@21 -- $ date +%s 00:01:30.497 18:22:43 -- pm/common@21 -- $ date +%s 00:01:30.497 18:22:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715883763 00:01:30.497 18:22:43 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715883763 00:01:30.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715883763_collect-vmstat.pm.log 00:01:30.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715883763_collect-cpu-load.pm.log 00:01:31.433 18:22:44 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:31.433 18:22:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:31.433 18:22:44 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:31.433 18:22:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:31.433 18:22:44 -- spdk/autobuild.sh@16 -- $ date -u 00:01:31.433 Thu May 16 06:22:44 PM UTC 2024 00:01:31.433 18:22:44 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:31.433 v24.09-pre 00:01:31.433 18:22:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:31.433 18:22:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:31.433 18:22:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:31.433 18:22:44 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:31.433 18:22:44 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:31.433 18:22:44 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.433 ************************************ 00:01:31.433 START TEST ubsan 00:01:31.433 ************************************ 00:01:31.433 using ubsan 00:01:31.433 18:22:44 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:31.433 00:01:31.433 real 0m0.000s 00:01:31.433 user 0m0.000s 00:01:31.433 sys 0m0.000s 00:01:31.433 18:22:44 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:31.433 ************************************ 00:01:31.433 END TEST ubsan 00:01:31.433 ************************************ 00:01:31.433 18:22:44 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:31.433 18:22:44 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:31.433 18:22:44 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:31.433 18:22:44 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:31.433 18:22:44 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:31.433 18:22:44 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:31.433 18:22:44 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:31.433 18:22:44 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:31.433 18:22:44 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:31.433 18:22:44 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:31.692 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:31.692 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:31.951 Using 'verbs' RDMA provider 00:01:47.838 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:00.064 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:00.064 Creating mk/config.mk...done. 00:02:00.064 Creating mk/cc.flags.mk...done. 00:02:00.064 Type 'make' to build. 
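To reproduce this build step outside the CI harness, the configure invocation chosen by autobuild.sh above and the make call that the next stage runs can be issued by hand. A minimal sketch, assuming the repository is checked out at ~/spdk_repo/spdk with submodules fetched, fio sources at /usr/src/fio as in this run, and the bundled DPDK in use:

    # Sketch: same configure flags as spdk/autobuild.sh used above, followed by the
    # parallel build that the next stage ("run_test make make -j10") performs.
    cd ~/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared
    make -j10   # job count matches the 10 vCPUs given to the VM (SPDK_VAGRANT_VMCPU=10)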
00:02:00.064 18:23:12 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:00.064 18:23:12 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:00.064 18:23:12 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:00.064 18:23:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.064 ************************************ 00:02:00.064 START TEST make 00:02:00.064 ************************************ 00:02:00.064 18:23:12 make -- common/autotest_common.sh@1121 -- $ make -j10 00:02:00.064 make[1]: Nothing to be done for 'all'. 00:02:12.274 The Meson build system 00:02:12.274 Version: 1.3.1 00:02:12.274 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:12.274 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:12.274 Build type: native build 00:02:12.274 Program cat found: YES (/usr/bin/cat) 00:02:12.274 Project name: DPDK 00:02:12.274 Project version: 24.03.0 00:02:12.274 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:12.274 C linker for the host machine: cc ld.bfd 2.39-16 00:02:12.274 Host machine cpu family: x86_64 00:02:12.274 Host machine cpu: x86_64 00:02:12.274 Message: ## Building in Developer Mode ## 00:02:12.274 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:12.274 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:12.274 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:12.274 Program python3 found: YES (/usr/bin/python3) 00:02:12.274 Program cat found: YES (/usr/bin/cat) 00:02:12.274 Compiler for C supports arguments -march=native: YES 00:02:12.274 Checking for size of "void *" : 8 00:02:12.274 Checking for size of "void *" : 8 (cached) 00:02:12.274 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:12.274 Library m found: YES 00:02:12.274 Library numa found: YES 00:02:12.274 Has header "numaif.h" : YES 00:02:12.274 Library fdt found: NO 00:02:12.274 Library execinfo found: NO 00:02:12.274 Has header "execinfo.h" : YES 00:02:12.274 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:12.274 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:12.274 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:12.274 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:12.274 Run-time dependency openssl found: YES 3.0.9 00:02:12.274 Run-time dependency libpcap found: YES 1.10.4 00:02:12.274 Has header "pcap.h" with dependency libpcap: YES 00:02:12.274 Compiler for C supports arguments -Wcast-qual: YES 00:02:12.274 Compiler for C supports arguments -Wdeprecated: YES 00:02:12.274 Compiler for C supports arguments -Wformat: YES 00:02:12.274 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:12.274 Compiler for C supports arguments -Wformat-security: NO 00:02:12.274 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:12.274 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:12.274 Compiler for C supports arguments -Wnested-externs: YES 00:02:12.274 Compiler for C supports arguments -Wold-style-definition: YES 00:02:12.274 Compiler for C supports arguments -Wpointer-arith: YES 00:02:12.274 Compiler for C supports arguments -Wsign-compare: YES 00:02:12.274 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:12.274 Compiler for C supports arguments -Wundef: YES 00:02:12.274 Compiler for C supports arguments -Wwrite-strings: YES 00:02:12.274 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:12.274 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:12.274 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:12.274 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:12.274 Program objdump found: YES (/usr/bin/objdump) 00:02:12.274 Compiler for C supports arguments -mavx512f: YES 00:02:12.274 Checking if "AVX512 checking" compiles: YES 00:02:12.274 Fetching value of define "__SSE4_2__" : 1 00:02:12.274 Fetching value of define "__AES__" : 1 00:02:12.274 Fetching value of define "__AVX__" : 1 00:02:12.274 Fetching value of define "__AVX2__" : 1 00:02:12.275 Fetching value of define "__AVX512BW__" : (undefined) 00:02:12.275 Fetching value of define "__AVX512CD__" : (undefined) 00:02:12.275 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:12.275 Fetching value of define "__AVX512F__" : (undefined) 00:02:12.275 Fetching value of define "__AVX512VL__" : (undefined) 00:02:12.275 Fetching value of define "__PCLMUL__" : 1 00:02:12.275 Fetching value of define "__RDRND__" : 1 00:02:12.275 Fetching value of define "__RDSEED__" : 1 00:02:12.275 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:12.275 Fetching value of define "__znver1__" : (undefined) 00:02:12.275 Fetching value of define "__znver2__" : (undefined) 00:02:12.275 Fetching value of define "__znver3__" : (undefined) 00:02:12.275 Fetching value of define "__znver4__" : (undefined) 00:02:12.275 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:12.275 Message: lib/log: Defining dependency "log" 00:02:12.275 Message: lib/kvargs: Defining dependency "kvargs" 00:02:12.275 Message: lib/telemetry: Defining dependency "telemetry" 00:02:12.275 Checking for function "getentropy" : NO 00:02:12.275 Message: lib/eal: Defining dependency "eal" 00:02:12.275 Message: lib/ring: Defining dependency "ring" 00:02:12.275 Message: lib/rcu: Defining dependency "rcu" 00:02:12.275 Message: lib/mempool: Defining dependency "mempool" 00:02:12.275 Message: lib/mbuf: Defining dependency "mbuf" 00:02:12.275 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:12.275 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:12.275 Compiler for C supports arguments -mpclmul: YES 00:02:12.275 Compiler for C supports arguments -maes: YES 00:02:12.275 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.275 Compiler for C supports arguments -mavx512bw: YES 00:02:12.275 Compiler for C supports arguments -mavx512dq: YES 00:02:12.275 Compiler for C supports arguments -mavx512vl: YES 00:02:12.275 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:12.275 Compiler for C supports arguments -mavx2: YES 00:02:12.275 Compiler for C supports arguments -mavx: YES 00:02:12.275 Message: lib/net: Defining dependency "net" 00:02:12.275 Message: lib/meter: Defining dependency "meter" 00:02:12.275 Message: lib/ethdev: Defining dependency "ethdev" 00:02:12.275 Message: lib/pci: Defining dependency "pci" 00:02:12.275 Message: lib/cmdline: Defining dependency "cmdline" 00:02:12.275 Message: lib/hash: Defining dependency "hash" 00:02:12.275 Message: lib/timer: Defining dependency "timer" 00:02:12.275 Message: lib/compressdev: Defining dependency "compressdev" 00:02:12.275 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:12.275 Message: lib/dmadev: Defining dependency "dmadev" 00:02:12.275 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:12.275 Message: lib/power: Defining 
dependency "power" 00:02:12.275 Message: lib/reorder: Defining dependency "reorder" 00:02:12.275 Message: lib/security: Defining dependency "security" 00:02:12.275 lib/meson.build:163: WARNING: Cannot disable mandatory library "stack" 00:02:12.275 Message: lib/stack: Defining dependency "stack" 00:02:12.275 Has header "linux/userfaultfd.h" : YES 00:02:12.275 Has header "linux/vduse.h" : YES 00:02:12.275 Message: lib/vhost: Defining dependency "vhost" 00:02:12.275 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.275 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.275 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.275 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.275 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:12.275 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:12.275 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:12.275 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:12.275 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:12.275 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:12.275 Program doxygen found: YES (/usr/bin/doxygen) 00:02:12.275 Configuring doxy-api-html.conf using configuration 00:02:12.275 Configuring doxy-api-man.conf using configuration 00:02:12.275 Program mandb found: YES (/usr/bin/mandb) 00:02:12.275 Program sphinx-build found: NO 00:02:12.275 Configuring rte_build_config.h using configuration 00:02:12.275 Message: 00:02:12.275 ================= 00:02:12.275 Applications Enabled 00:02:12.275 ================= 00:02:12.275 00:02:12.275 apps: 00:02:12.275 00:02:12.275 00:02:12.275 Message: 00:02:12.275 ================= 00:02:12.275 Libraries Enabled 00:02:12.275 ================= 00:02:12.275 00:02:12.275 libs: 00:02:12.275 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.275 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:12.275 cryptodev, dmadev, power, reorder, security, stack, vhost, 00:02:12.275 00:02:12.275 Message: 00:02:12.275 =============== 00:02:12.275 Drivers Enabled 00:02:12.275 =============== 00:02:12.275 00:02:12.275 common: 00:02:12.275 00:02:12.275 bus: 00:02:12.275 pci, vdev, 00:02:12.275 mempool: 00:02:12.275 ring, 00:02:12.275 dma: 00:02:12.275 00:02:12.275 net: 00:02:12.275 00:02:12.275 crypto: 00:02:12.275 00:02:12.275 compress: 00:02:12.275 00:02:12.275 vdpa: 00:02:12.275 00:02:12.275 00:02:12.275 Message: 00:02:12.275 ================= 00:02:12.275 Content Skipped 00:02:12.275 ================= 00:02:12.275 00:02:12.275 apps: 00:02:12.275 dumpcap: explicitly disabled via build config 00:02:12.275 graph: explicitly disabled via build config 00:02:12.275 pdump: explicitly disabled via build config 00:02:12.275 proc-info: explicitly disabled via build config 00:02:12.275 test-acl: explicitly disabled via build config 00:02:12.275 test-bbdev: explicitly disabled via build config 00:02:12.275 test-cmdline: explicitly disabled via build config 00:02:12.275 test-compress-perf: explicitly disabled via build config 00:02:12.275 test-crypto-perf: explicitly disabled via build config 00:02:12.275 test-dma-perf: explicitly disabled via build config 00:02:12.275 test-eventdev: explicitly disabled via build config 00:02:12.275 test-fib: explicitly disabled via build config 00:02:12.275 test-flow-perf: explicitly disabled via build config 
00:02:12.275 test-gpudev: explicitly disabled via build config 00:02:12.275 test-mldev: explicitly disabled via build config 00:02:12.275 test-pipeline: explicitly disabled via build config 00:02:12.275 test-pmd: explicitly disabled via build config 00:02:12.275 test-regex: explicitly disabled via build config 00:02:12.275 test-sad: explicitly disabled via build config 00:02:12.276 test-security-perf: explicitly disabled via build config 00:02:12.276 00:02:12.276 libs: 00:02:12.276 argparse: explicitly disabled via build config 00:02:12.276 metrics: explicitly disabled via build config 00:02:12.276 acl: explicitly disabled via build config 00:02:12.276 bbdev: explicitly disabled via build config 00:02:12.276 bitratestats: explicitly disabled via build config 00:02:12.276 bpf: explicitly disabled via build config 00:02:12.276 cfgfile: explicitly disabled via build config 00:02:12.276 distributor: explicitly disabled via build config 00:02:12.276 efd: explicitly disabled via build config 00:02:12.276 eventdev: explicitly disabled via build config 00:02:12.276 dispatcher: explicitly disabled via build config 00:02:12.276 gpudev: explicitly disabled via build config 00:02:12.276 gro: explicitly disabled via build config 00:02:12.276 gso: explicitly disabled via build config 00:02:12.276 ip_frag: explicitly disabled via build config 00:02:12.276 jobstats: explicitly disabled via build config 00:02:12.276 latencystats: explicitly disabled via build config 00:02:12.276 lpm: explicitly disabled via build config 00:02:12.276 member: explicitly disabled via build config 00:02:12.276 pcapng: explicitly disabled via build config 00:02:12.276 rawdev: explicitly disabled via build config 00:02:12.276 regexdev: explicitly disabled via build config 00:02:12.276 mldev: explicitly disabled via build config 00:02:12.276 rib: explicitly disabled via build config 00:02:12.276 sched: explicitly disabled via build config 00:02:12.276 ipsec: explicitly disabled via build config 00:02:12.276 pdcp: explicitly disabled via build config 00:02:12.276 fib: explicitly disabled via build config 00:02:12.276 port: explicitly disabled via build config 00:02:12.276 pdump: explicitly disabled via build config 00:02:12.276 table: explicitly disabled via build config 00:02:12.276 pipeline: explicitly disabled via build config 00:02:12.276 graph: explicitly disabled via build config 00:02:12.276 node: explicitly disabled via build config 00:02:12.276 00:02:12.276 drivers: 00:02:12.276 common/cpt: not in enabled drivers build config 00:02:12.276 common/dpaax: not in enabled drivers build config 00:02:12.276 common/iavf: not in enabled drivers build config 00:02:12.276 common/idpf: not in enabled drivers build config 00:02:12.276 common/ionic: not in enabled drivers build config 00:02:12.276 common/mvep: not in enabled drivers build config 00:02:12.276 common/octeontx: not in enabled drivers build config 00:02:12.276 bus/auxiliary: not in enabled drivers build config 00:02:12.276 bus/cdx: not in enabled drivers build config 00:02:12.276 bus/dpaa: not in enabled drivers build config 00:02:12.276 bus/fslmc: not in enabled drivers build config 00:02:12.276 bus/ifpga: not in enabled drivers build config 00:02:12.276 bus/platform: not in enabled drivers build config 00:02:12.276 bus/uacce: not in enabled drivers build config 00:02:12.276 bus/vmbus: not in enabled drivers build config 00:02:12.276 common/cnxk: not in enabled drivers build config 00:02:12.276 common/mlx5: not in enabled drivers build config 00:02:12.276 common/nfp: not 
in enabled drivers build config 00:02:12.276 common/nitrox: not in enabled drivers build config 00:02:12.276 common/qat: not in enabled drivers build config 00:02:12.276 common/sfc_efx: not in enabled drivers build config 00:02:12.276 mempool/bucket: not in enabled drivers build config 00:02:12.276 mempool/cnxk: not in enabled drivers build config 00:02:12.276 mempool/dpaa: not in enabled drivers build config 00:02:12.276 mempool/dpaa2: not in enabled drivers build config 00:02:12.276 mempool/octeontx: not in enabled drivers build config 00:02:12.276 mempool/stack: not in enabled drivers build config 00:02:12.276 dma/cnxk: not in enabled drivers build config 00:02:12.276 dma/dpaa: not in enabled drivers build config 00:02:12.276 dma/dpaa2: not in enabled drivers build config 00:02:12.276 dma/hisilicon: not in enabled drivers build config 00:02:12.276 dma/idxd: not in enabled drivers build config 00:02:12.276 dma/ioat: not in enabled drivers build config 00:02:12.276 dma/skeleton: not in enabled drivers build config 00:02:12.276 net/af_packet: not in enabled drivers build config 00:02:12.276 net/af_xdp: not in enabled drivers build config 00:02:12.276 net/ark: not in enabled drivers build config 00:02:12.276 net/atlantic: not in enabled drivers build config 00:02:12.276 net/avp: not in enabled drivers build config 00:02:12.276 net/axgbe: not in enabled drivers build config 00:02:12.276 net/bnx2x: not in enabled drivers build config 00:02:12.276 net/bnxt: not in enabled drivers build config 00:02:12.276 net/bonding: not in enabled drivers build config 00:02:12.276 net/cnxk: not in enabled drivers build config 00:02:12.276 net/cpfl: not in enabled drivers build config 00:02:12.276 net/cxgbe: not in enabled drivers build config 00:02:12.276 net/dpaa: not in enabled drivers build config 00:02:12.276 net/dpaa2: not in enabled drivers build config 00:02:12.276 net/e1000: not in enabled drivers build config 00:02:12.276 net/ena: not in enabled drivers build config 00:02:12.276 net/enetc: not in enabled drivers build config 00:02:12.276 net/enetfec: not in enabled drivers build config 00:02:12.276 net/enic: not in enabled drivers build config 00:02:12.276 net/failsafe: not in enabled drivers build config 00:02:12.276 net/fm10k: not in enabled drivers build config 00:02:12.276 net/gve: not in enabled drivers build config 00:02:12.276 net/hinic: not in enabled drivers build config 00:02:12.276 net/hns3: not in enabled drivers build config 00:02:12.276 net/i40e: not in enabled drivers build config 00:02:12.276 net/iavf: not in enabled drivers build config 00:02:12.276 net/ice: not in enabled drivers build config 00:02:12.276 net/idpf: not in enabled drivers build config 00:02:12.276 net/igc: not in enabled drivers build config 00:02:12.276 net/ionic: not in enabled drivers build config 00:02:12.276 net/ipn3ke: not in enabled drivers build config 00:02:12.276 net/ixgbe: not in enabled drivers build config 00:02:12.276 net/mana: not in enabled drivers build config 00:02:12.276 net/memif: not in enabled drivers build config 00:02:12.276 net/mlx4: not in enabled drivers build config 00:02:12.276 net/mlx5: not in enabled drivers build config 00:02:12.276 net/mvneta: not in enabled drivers build config 00:02:12.276 net/mvpp2: not in enabled drivers build config 00:02:12.276 net/netvsc: not in enabled drivers build config 00:02:12.276 net/nfb: not in enabled drivers build config 00:02:12.276 net/nfp: not in enabled drivers build config 00:02:12.276 net/ngbe: not in enabled drivers build config 00:02:12.276 
net/null: not in enabled drivers build config 00:02:12.276 net/octeontx: not in enabled drivers build config 00:02:12.276 net/octeon_ep: not in enabled drivers build config 00:02:12.276 net/pcap: not in enabled drivers build config 00:02:12.276 net/pfe: not in enabled drivers build config 00:02:12.276 net/qede: not in enabled drivers build config 00:02:12.276 net/ring: not in enabled drivers build config 00:02:12.276 net/sfc: not in enabled drivers build config 00:02:12.276 net/softnic: not in enabled drivers build config 00:02:12.276 net/tap: not in enabled drivers build config 00:02:12.276 net/thunderx: not in enabled drivers build config 00:02:12.277 net/txgbe: not in enabled drivers build config 00:02:12.277 net/vdev_netvsc: not in enabled drivers build config 00:02:12.277 net/vhost: not in enabled drivers build config 00:02:12.277 net/virtio: not in enabled drivers build config 00:02:12.277 net/vmxnet3: not in enabled drivers build config 00:02:12.277 raw/*: missing internal dependency, "rawdev" 00:02:12.277 crypto/armv8: not in enabled drivers build config 00:02:12.277 crypto/bcmfs: not in enabled drivers build config 00:02:12.277 crypto/caam_jr: not in enabled drivers build config 00:02:12.277 crypto/ccp: not in enabled drivers build config 00:02:12.277 crypto/cnxk: not in enabled drivers build config 00:02:12.277 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.277 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.277 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.277 crypto/mlx5: not in enabled drivers build config 00:02:12.277 crypto/mvsam: not in enabled drivers build config 00:02:12.277 crypto/nitrox: not in enabled drivers build config 00:02:12.277 crypto/null: not in enabled drivers build config 00:02:12.277 crypto/octeontx: not in enabled drivers build config 00:02:12.277 crypto/openssl: not in enabled drivers build config 00:02:12.277 crypto/scheduler: not in enabled drivers build config 00:02:12.277 crypto/uadk: not in enabled drivers build config 00:02:12.277 crypto/virtio: not in enabled drivers build config 00:02:12.277 compress/isal: not in enabled drivers build config 00:02:12.277 compress/mlx5: not in enabled drivers build config 00:02:12.277 compress/nitrox: not in enabled drivers build config 00:02:12.277 compress/octeontx: not in enabled drivers build config 00:02:12.277 compress/zlib: not in enabled drivers build config 00:02:12.277 regex/*: missing internal dependency, "regexdev" 00:02:12.277 ml/*: missing internal dependency, "mldev" 00:02:12.277 vdpa/ifc: not in enabled drivers build config 00:02:12.277 vdpa/mlx5: not in enabled drivers build config 00:02:12.277 vdpa/nfp: not in enabled drivers build config 00:02:12.277 vdpa/sfc: not in enabled drivers build config 00:02:12.277 event/*: missing internal dependency, "eventdev" 00:02:12.277 baseband/*: missing internal dependency, "bbdev" 00:02:12.277 gpu/*: missing internal dependency, "gpudev" 00:02:12.277 00:02:12.277 00:02:12.277 Build targets in project: 88 00:02:12.277 00:02:12.277 DPDK 24.03.0 00:02:12.277 00:02:12.277 User defined options 00:02:12.277 buildtype : debug 00:02:12.277 default_library : shared 00:02:12.277 libdir : lib 00:02:12.277 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:12.277 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:12.277 c_link_args : 00:02:12.277 cpu_instruction_set: native 00:02:12.277 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:12.277 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:12.277 enable_docs : false 00:02:12.277 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:12.277 enable_kmods : false 00:02:12.277 tests : false 00:02:12.277 00:02:12.277 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.277 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:12.277 [1/274] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:12.277 [2/274] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.277 [3/274] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:12.277 [4/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.277 [5/274] Linking static target lib/librte_kvargs.a 00:02:12.277 [6/274] Linking static target lib/librte_log.a 00:02:12.277 [7/274] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.277 [8/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.277 [9/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.277 [10/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.277 [11/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.277 [12/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:12.277 [13/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:12.277 [14/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:12.277 [15/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:12.535 [16/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:12.535 [17/274] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:12.535 [18/274] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.535 [19/274] Linking static target lib/librte_telemetry.a 00:02:12.535 [20/274] Linking target lib/librte_log.so.24.1 00:02:12.793 [21/274] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:13.051 [22/274] Linking target lib/librte_kvargs.so.24.1 00:02:13.051 [23/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:13.051 [24/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.051 [25/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.051 [26/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.051 [27/274] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:13.051 [28/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:13.310 [29/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:13.310 [30/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.310 [31/274] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:13.569 [32/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:13.569 [33/274] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.569 [34/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:13.569 [35/274] Linking target lib/librte_telemetry.so.24.1 00:02:13.892 [36/274] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:13.892 [37/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:13.892 [38/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:13.892 [39/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:13.892 [40/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:13.892 [41/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.150 [42/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.150 [43/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.150 [44/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.409 [45/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.409 [46/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.409 [47/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.409 [48/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.409 [49/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:14.669 [50/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:14.669 [51/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.927 [52/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.927 [53/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:14.927 [54/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.186 [55/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.186 [56/274] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.186 [57/274] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.444 [58/274] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.444 [59/274] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.444 [60/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.444 [61/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.703 [62/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.703 [63/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.963 [64/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.963 [65/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.963 [66/274] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.221 [67/274] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.221 [68/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.221 [69/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.481 [70/274] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.481 [71/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.481 [72/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.481 [73/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.739 [74/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.739 [75/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.739 [76/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.739 [77/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.998 [78/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.998 [79/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.998 [80/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:17.257 [81/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:17.257 [82/274] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:17.257 [83/274] Linking static target lib/librte_ring.a 00:02:17.257 [84/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:17.257 [85/274] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.515 [86/274] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:17.515 [87/274] Linking static target lib/librte_eal.a 00:02:17.515 [88/274] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.515 [89/274] Linking static target lib/librte_rcu.a 00:02:17.515 [90/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.515 [91/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.773 [92/274] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.773 [93/274] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.032 [94/274] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.032 [95/274] Linking static target lib/librte_mempool.a 00:02:18.032 [96/274] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.032 [97/274] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:18.032 [98/274] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:18.032 [99/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.290 [100/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.290 [101/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.290 [102/274] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.290 [103/274] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.290 [104/274] Linking static target lib/librte_mbuf.a 00:02:18.549 [105/274] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.549 [106/274] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.549 [107/274] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.807 [108/274] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.807 [109/274] Linking static target lib/librte_meter.a 00:02:19.065 [110/274] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:19.065 [111/274] Linking static target lib/librte_net.a 00:02:19.065 [112/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:19.065 [113/274] Compiling 
C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:19.065 [114/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.323 [115/274] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.323 [116/274] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.323 [117/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.323 [118/274] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.581 [119/274] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.839 [120/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.405 [121/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.405 [122/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.405 [123/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.405 [124/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.405 [125/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.405 [126/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.405 [127/274] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:20.405 [128/274] Linking static target lib/librte_pci.a 00:02:20.663 [129/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.663 [130/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:20.663 [131/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:20.663 [132/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:20.922 [133/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:20.922 [134/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.922 [135/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:20.922 [136/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.922 [137/274] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.922 [138/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.922 [139/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.922 [140/274] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.922 [141/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.922 [142/274] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:20.922 [143/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.922 [144/274] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:21.180 [145/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.180 [146/274] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.180 [147/274] Linking static target lib/librte_ethdev.a 00:02:21.439 [148/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:21.439 [149/274] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:21.710 [150/274] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:21.710 [151/274] Linking static target lib/librte_cmdline.a 00:02:21.710 [152/274] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:21.710 [153/274] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:21.710 [154/274] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:21.710 [155/274] Linking static target lib/librte_timer.a 00:02:21.975 [156/274] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.975 [157/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:21.975 [158/274] Linking static target lib/librte_hash.a 00:02:22.234 [159/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.234 [160/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:22.507 [161/274] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.507 [162/274] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.507 [163/274] Linking static target lib/librte_compressdev.a 00:02:22.507 [164/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.507 [165/274] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:22.790 [166/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.790 [167/274] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:22.791 [168/274] Linking static target lib/librte_dmadev.a 00:02:23.048 [169/274] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:23.048 [170/274] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.048 [171/274] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:23.048 [172/274] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:23.048 [173/274] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:23.048 [174/274] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.048 [175/274] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:23.048 [176/274] Linking static target lib/librte_cryptodev.a 00:02:23.306 [177/274] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.564 [178/274] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:23.564 [179/274] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:23.564 [180/274] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.564 [181/274] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:23.564 [182/274] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:23.822 [183/274] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:23.822 [184/274] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:23.822 [185/274] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.140 [186/274] Linking static target lib/librte_power.a 00:02:24.140 [187/274] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.140 [188/274] Linking static target lib/librte_reorder.a 00:02:24.140 [189/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:24.140 [190/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:24.140 [191/274] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:24.140 [192/274] Linking static 
target lib/librte_stack.a 00:02:24.140 [193/274] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.399 [194/274] Linking static target lib/librte_security.a 00:02:24.399 [195/274] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.399 [196/274] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.656 [197/274] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.656 [198/274] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.656 [199/274] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.914 [200/274] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.914 [201/274] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.172 [202/274] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.172 [203/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:25.172 [204/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:25.431 [205/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:25.431 [206/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:25.431 [207/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:25.689 [208/274] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:25.689 [209/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:25.689 [210/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:25.948 [211/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:25.948 [212/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:25.948 [213/274] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:25.948 [214/274] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:25.948 [215/274] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:25.948 [216/274] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:25.948 [217/274] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:26.207 [218/274] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:26.207 [219/274] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.207 [220/274] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:26.207 [221/274] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:26.207 [222/274] Linking static target drivers/librte_bus_vdev.a 00:02:26.207 [223/274] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.207 [224/274] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:26.207 [225/274] Linking static target drivers/librte_bus_pci.a 00:02:26.207 [226/274] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:26.207 [227/274] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:26.466 [228/274] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.466 [229/274] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:26.466 [230/274] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.466 [231/274] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:26.466 [232/274] Linking static target drivers/librte_mempool_ring.a 00:02:26.724 [233/274] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.659 [234/274] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:27.659 [235/274] Linking static target lib/librte_vhost.a 00:02:27.918 [236/274] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.918 [237/274] Linking target lib/librte_eal.so.24.1 00:02:28.176 [238/274] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:28.176 [239/274] Linking target lib/librte_pci.so.24.1 00:02:28.176 [240/274] Linking target lib/librte_meter.so.24.1 00:02:28.176 [241/274] Linking target lib/librte_timer.so.24.1 00:02:28.176 [242/274] Linking target lib/librte_stack.so.24.1 00:02:28.176 [243/274] Linking target drivers/librte_bus_vdev.so.24.1 00:02:28.176 [244/274] Linking target lib/librte_ring.so.24.1 00:02:28.176 [245/274] Linking target lib/librte_dmadev.so.24.1 00:02:28.434 [246/274] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:28.434 [247/274] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:28.434 [248/274] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:28.434 [249/274] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:28.434 [250/274] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:28.434 [251/274] Linking target lib/librte_rcu.so.24.1 00:02:28.434 [252/274] Linking target drivers/librte_bus_pci.so.24.1 00:02:28.434 [253/274] Linking target lib/librte_mempool.so.24.1 00:02:28.434 [254/274] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.693 [255/274] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:28.693 [256/274] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:28.693 [257/274] Linking target drivers/librte_mempool_ring.so.24.1 00:02:28.693 [258/274] Linking target lib/librte_mbuf.so.24.1 00:02:28.693 [259/274] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:28.952 [260/274] Linking target lib/librte_reorder.so.24.1 00:02:28.952 [261/274] Linking target lib/librte_compressdev.so.24.1 00:02:28.952 [262/274] Linking target lib/librte_net.so.24.1 00:02:28.952 [263/274] Linking target lib/librte_cryptodev.so.24.1 00:02:28.952 [264/274] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:28.952 [265/274] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:28.952 [266/274] Linking target lib/librte_hash.so.24.1 00:02:28.952 [267/274] Linking target lib/librte_security.so.24.1 00:02:28.952 [268/274] Linking target lib/librte_cmdline.so.24.1 00:02:29.211 [269/274] Linking target lib/librte_ethdev.so.24.1 00:02:29.211 [270/274] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:29.211 [271/274] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.211 [272/274] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 
00:02:29.211 [273/274] Linking target lib/librte_power.so.24.1 00:02:29.470 [274/274] Linking target lib/librte_vhost.so.24.1 00:02:29.470 INFO: autodetecting backend as ninja 00:02:29.470 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:30.406 CC lib/log/log_flags.o 00:02:30.406 CC lib/log/log.o 00:02:30.406 CC lib/log/log_deprecated.o 00:02:30.406 CC lib/ut_mock/mock.o 00:02:30.406 CC lib/ut/ut.o 00:02:30.665 LIB libspdk_ut.a 00:02:30.665 LIB libspdk_ut_mock.a 00:02:30.665 LIB libspdk_log.a 00:02:30.665 SO libspdk_ut.so.2.0 00:02:30.665 SO libspdk_ut_mock.so.6.0 00:02:30.665 SO libspdk_log.so.7.0 00:02:30.665 SYMLINK libspdk_ut_mock.so 00:02:30.665 SYMLINK libspdk_ut.so 00:02:30.665 SYMLINK libspdk_log.so 00:02:30.924 CXX lib/trace_parser/trace.o 00:02:30.924 CC lib/util/base64.o 00:02:30.924 CC lib/util/bit_array.o 00:02:30.924 CC lib/dma/dma.o 00:02:30.924 CC lib/util/cpuset.o 00:02:30.924 CC lib/util/crc16.o 00:02:30.924 CC lib/util/crc32.o 00:02:30.924 CC lib/util/crc32c.o 00:02:30.924 CC lib/ioat/ioat.o 00:02:31.183 CC lib/vfio_user/host/vfio_user_pci.o 00:02:31.183 CC lib/vfio_user/host/vfio_user.o 00:02:31.183 CC lib/util/crc32_ieee.o 00:02:31.183 CC lib/util/crc64.o 00:02:31.183 CC lib/util/dif.o 00:02:31.183 LIB libspdk_dma.a 00:02:31.183 CC lib/util/fd.o 00:02:31.183 SO libspdk_dma.so.4.0 00:02:31.183 CC lib/util/file.o 00:02:31.442 CC lib/util/hexlify.o 00:02:31.442 CC lib/util/iov.o 00:02:31.442 LIB libspdk_ioat.a 00:02:31.442 SYMLINK libspdk_dma.so 00:02:31.442 CC lib/util/math.o 00:02:31.442 CC lib/util/pipe.o 00:02:31.442 SO libspdk_ioat.so.7.0 00:02:31.442 LIB libspdk_vfio_user.a 00:02:31.442 CC lib/util/strerror_tls.o 00:02:31.442 SYMLINK libspdk_ioat.so 00:02:31.442 CC lib/util/string.o 00:02:31.442 SO libspdk_vfio_user.so.5.0 00:02:31.442 CC lib/util/uuid.o 00:02:31.442 SYMLINK libspdk_vfio_user.so 00:02:31.442 CC lib/util/fd_group.o 00:02:31.442 CC lib/util/xor.o 00:02:31.442 CC lib/util/zipf.o 00:02:31.701 LIB libspdk_util.a 00:02:31.960 SO libspdk_util.so.9.0 00:02:31.960 LIB libspdk_trace_parser.a 00:02:31.960 SO libspdk_trace_parser.so.5.0 00:02:31.961 SYMLINK libspdk_util.so 00:02:32.219 SYMLINK libspdk_trace_parser.so 00:02:32.219 CC lib/conf/conf.o 00:02:32.219 CC lib/json/json_parse.o 00:02:32.219 CC lib/rdma/rdma_verbs.o 00:02:32.219 CC lib/rdma/common.o 00:02:32.219 CC lib/json/json_util.o 00:02:32.219 CC lib/json/json_write.o 00:02:32.219 CC lib/idxd/idxd.o 00:02:32.219 CC lib/idxd/idxd_user.o 00:02:32.219 CC lib/vmd/vmd.o 00:02:32.219 CC lib/env_dpdk/env.o 00:02:32.478 CC lib/env_dpdk/memory.o 00:02:32.478 CC lib/env_dpdk/pci.o 00:02:32.478 CC lib/env_dpdk/init.o 00:02:32.478 LIB libspdk_conf.a 00:02:32.478 CC lib/env_dpdk/threads.o 00:02:32.478 SO libspdk_conf.so.6.0 00:02:32.478 LIB libspdk_rdma.a 00:02:32.478 LIB libspdk_json.a 00:02:32.478 SO libspdk_rdma.so.6.0 00:02:32.478 SO libspdk_json.so.6.0 00:02:32.478 SYMLINK libspdk_conf.so 00:02:32.739 CC lib/vmd/led.o 00:02:32.739 SYMLINK libspdk_json.so 00:02:32.739 SYMLINK libspdk_rdma.so 00:02:32.739 CC lib/env_dpdk/pci_ioat.o 00:02:32.739 CC lib/env_dpdk/pci_virtio.o 00:02:32.739 CC lib/env_dpdk/pci_vmd.o 00:02:32.739 CC lib/env_dpdk/pci_idxd.o 00:02:32.739 CC lib/env_dpdk/pci_event.o 00:02:32.739 CC lib/env_dpdk/sigbus_handler.o 00:02:32.739 CC lib/jsonrpc/jsonrpc_server.o 00:02:32.999 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:32.999 LIB libspdk_idxd.a 00:02:32.999 LIB libspdk_vmd.a 00:02:32.999 SO libspdk_idxd.so.12.0 
00:02:32.999 CC lib/jsonrpc/jsonrpc_client.o 00:02:32.999 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:32.999 SO libspdk_vmd.so.6.0 00:02:32.999 CC lib/env_dpdk/pci_dpdk.o 00:02:32.999 SYMLINK libspdk_idxd.so 00:02:32.999 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:32.999 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:32.999 SYMLINK libspdk_vmd.so 00:02:33.258 LIB libspdk_jsonrpc.a 00:02:33.258 SO libspdk_jsonrpc.so.6.0 00:02:33.258 SYMLINK libspdk_jsonrpc.so 00:02:33.517 CC lib/rpc/rpc.o 00:02:33.776 LIB libspdk_env_dpdk.a 00:02:33.776 SO libspdk_env_dpdk.so.14.0 00:02:33.776 LIB libspdk_rpc.a 00:02:33.776 SO libspdk_rpc.so.6.0 00:02:34.034 SYMLINK libspdk_rpc.so 00:02:34.034 SYMLINK libspdk_env_dpdk.so 00:02:34.034 CC lib/keyring/keyring.o 00:02:34.034 CC lib/keyring/keyring_rpc.o 00:02:34.034 CC lib/notify/notify.o 00:02:34.034 CC lib/trace/trace_flags.o 00:02:34.034 CC lib/notify/notify_rpc.o 00:02:34.034 CC lib/trace/trace_rpc.o 00:02:34.034 CC lib/trace/trace.o 00:02:34.293 LIB libspdk_notify.a 00:02:34.552 SO libspdk_notify.so.6.0 00:02:34.552 LIB libspdk_trace.a 00:02:34.552 LIB libspdk_keyring.a 00:02:34.552 SO libspdk_trace.so.10.0 00:02:34.552 SYMLINK libspdk_notify.so 00:02:34.552 SO libspdk_keyring.so.1.0 00:02:34.552 SYMLINK libspdk_trace.so 00:02:34.552 SYMLINK libspdk_keyring.so 00:02:34.811 CC lib/sock/sock.o 00:02:34.811 CC lib/sock/sock_rpc.o 00:02:34.811 CC lib/thread/thread.o 00:02:34.811 CC lib/thread/iobuf.o 00:02:35.379 LIB libspdk_sock.a 00:02:35.379 SO libspdk_sock.so.9.0 00:02:35.379 SYMLINK libspdk_sock.so 00:02:35.639 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:35.639 CC lib/nvme/nvme_ctrlr.o 00:02:35.639 CC lib/nvme/nvme_fabric.o 00:02:35.639 CC lib/nvme/nvme_ns_cmd.o 00:02:35.639 CC lib/nvme/nvme_ns.o 00:02:35.639 CC lib/nvme/nvme_pcie_common.o 00:02:35.639 CC lib/nvme/nvme_pcie.o 00:02:35.639 CC lib/nvme/nvme.o 00:02:35.639 CC lib/nvme/nvme_qpair.o 00:02:36.584 LIB libspdk_thread.a 00:02:36.584 SO libspdk_thread.so.10.0 00:02:36.584 CC lib/nvme/nvme_quirks.o 00:02:36.584 CC lib/nvme/nvme_transport.o 00:02:36.584 SYMLINK libspdk_thread.so 00:02:36.584 CC lib/nvme/nvme_discovery.o 00:02:36.584 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:36.584 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:36.584 CC lib/nvme/nvme_tcp.o 00:02:36.584 CC lib/nvme/nvme_opal.o 00:02:36.843 CC lib/nvme/nvme_io_msg.o 00:02:36.843 CC lib/nvme/nvme_poll_group.o 00:02:37.103 CC lib/nvme/nvme_zns.o 00:02:37.103 CC lib/nvme/nvme_stubs.o 00:02:37.364 CC lib/nvme/nvme_auth.o 00:02:37.364 CC lib/accel/accel.o 00:02:37.364 CC lib/blob/blobstore.o 00:02:37.364 CC lib/init/json_config.o 00:02:37.364 CC lib/init/subsystem.o 00:02:37.627 CC lib/init/subsystem_rpc.o 00:02:37.627 CC lib/blob/request.o 00:02:37.627 CC lib/blob/zeroes.o 00:02:37.627 CC lib/blob/blob_bs_dev.o 00:02:37.627 CC lib/init/rpc.o 00:02:37.894 CC lib/accel/accel_rpc.o 00:02:37.894 CC lib/virtio/virtio.o 00:02:37.894 LIB libspdk_init.a 00:02:37.894 CC lib/accel/accel_sw.o 00:02:37.894 CC lib/virtio/virtio_vhost_user.o 00:02:37.894 CC lib/nvme/nvme_cuse.o 00:02:37.894 SO libspdk_init.so.5.0 00:02:38.163 CC lib/virtio/virtio_vfio_user.o 00:02:38.163 SYMLINK libspdk_init.so 00:02:38.163 CC lib/nvme/nvme_rdma.o 00:02:38.163 CC lib/virtio/virtio_pci.o 00:02:38.434 CC lib/event/app.o 00:02:38.434 CC lib/event/app_rpc.o 00:02:38.434 CC lib/event/reactor.o 00:02:38.434 CC lib/event/scheduler_static.o 00:02:38.434 CC lib/event/log_rpc.o 00:02:38.434 LIB libspdk_accel.a 00:02:38.434 SO libspdk_accel.so.15.0 00:02:38.434 SYMLINK libspdk_accel.so 00:02:38.700 LIB 
libspdk_virtio.a 00:02:38.700 SO libspdk_virtio.so.7.0 00:02:38.700 SYMLINK libspdk_virtio.so 00:02:38.700 CC lib/bdev/bdev_rpc.o 00:02:38.700 CC lib/bdev/bdev_zone.o 00:02:38.700 CC lib/bdev/bdev.o 00:02:38.700 CC lib/bdev/scsi_nvme.o 00:02:38.700 CC lib/bdev/part.o 00:02:38.700 LIB libspdk_event.a 00:02:38.966 SO libspdk_event.so.13.0 00:02:38.966 SYMLINK libspdk_event.so 00:02:39.535 LIB libspdk_nvme.a 00:02:39.794 SO libspdk_nvme.so.13.0 00:02:40.053 SYMLINK libspdk_nvme.so 00:02:40.330 LIB libspdk_blob.a 00:02:40.330 SO libspdk_blob.so.11.0 00:02:40.591 SYMLINK libspdk_blob.so 00:02:40.850 CC lib/lvol/lvol.o 00:02:40.850 CC lib/blobfs/blobfs.o 00:02:40.850 CC lib/blobfs/tree.o 00:02:41.418 LIB libspdk_bdev.a 00:02:41.418 SO libspdk_bdev.so.15.0 00:02:41.418 SYMLINK libspdk_bdev.so 00:02:41.677 LIB libspdk_blobfs.a 00:02:41.677 SO libspdk_blobfs.so.10.0 00:02:41.677 LIB libspdk_lvol.a 00:02:41.677 SO libspdk_lvol.so.10.0 00:02:41.677 CC lib/ublk/ublk.o 00:02:41.677 CC lib/ublk/ublk_rpc.o 00:02:41.677 CC lib/nvmf/ctrlr.o 00:02:41.677 CC lib/nvmf/ctrlr_discovery.o 00:02:41.677 CC lib/nvmf/ctrlr_bdev.o 00:02:41.677 CC lib/scsi/dev.o 00:02:41.677 CC lib/nbd/nbd.o 00:02:41.677 SYMLINK libspdk_blobfs.so 00:02:41.677 CC lib/scsi/lun.o 00:02:41.677 CC lib/ftl/ftl_core.o 00:02:41.677 SYMLINK libspdk_lvol.so 00:02:41.677 CC lib/nvmf/subsystem.o 00:02:41.935 CC lib/scsi/port.o 00:02:41.935 CC lib/scsi/scsi.o 00:02:42.193 CC lib/nvmf/nvmf.o 00:02:42.193 CC lib/ftl/ftl_init.o 00:02:42.193 CC lib/scsi/scsi_bdev.o 00:02:42.193 CC lib/nvmf/nvmf_rpc.o 00:02:42.193 CC lib/nbd/nbd_rpc.o 00:02:42.193 CC lib/nvmf/transport.o 00:02:42.452 LIB libspdk_ublk.a 00:02:42.452 CC lib/nvmf/tcp.o 00:02:42.452 CC lib/ftl/ftl_layout.o 00:02:42.452 SO libspdk_ublk.so.3.0 00:02:42.452 SYMLINK libspdk_ublk.so 00:02:42.452 CC lib/nvmf/stubs.o 00:02:42.452 LIB libspdk_nbd.a 00:02:42.711 SO libspdk_nbd.so.7.0 00:02:42.711 CC lib/scsi/scsi_pr.o 00:02:42.711 CC lib/ftl/ftl_debug.o 00:02:42.711 SYMLINK libspdk_nbd.so 00:02:42.711 CC lib/scsi/scsi_rpc.o 00:02:42.970 CC lib/ftl/ftl_io.o 00:02:42.970 CC lib/ftl/ftl_sb.o 00:02:42.970 CC lib/ftl/ftl_l2p.o 00:02:42.970 CC lib/ftl/ftl_l2p_flat.o 00:02:42.970 CC lib/scsi/task.o 00:02:42.970 CC lib/ftl/ftl_nv_cache.o 00:02:43.229 CC lib/ftl/ftl_band.o 00:02:43.229 CC lib/ftl/ftl_band_ops.o 00:02:43.229 CC lib/ftl/ftl_writer.o 00:02:43.229 CC lib/ftl/ftl_rq.o 00:02:43.229 CC lib/ftl/ftl_reloc.o 00:02:43.229 LIB libspdk_scsi.a 00:02:43.229 CC lib/ftl/ftl_l2p_cache.o 00:02:43.489 SO libspdk_scsi.so.9.0 00:02:43.489 CC lib/ftl/ftl_p2l.o 00:02:43.489 CC lib/ftl/mngt/ftl_mngt.o 00:02:43.489 SYMLINK libspdk_scsi.so 00:02:43.489 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:43.489 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:43.489 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:43.489 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:43.748 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:43.748 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:43.748 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:43.748 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:43.748 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:44.007 CC lib/nvmf/mdns_server.o 00:02:44.007 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:44.007 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:44.007 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:44.007 CC lib/iscsi/conn.o 00:02:44.007 CC lib/iscsi/init_grp.o 00:02:44.007 CC lib/iscsi/iscsi.o 00:02:44.007 CC lib/ftl/utils/ftl_conf.o 00:02:44.007 CC lib/ftl/utils/ftl_md.o 00:02:44.007 CC lib/vhost/vhost.o 00:02:44.265 CC lib/vhost/vhost_rpc.o 00:02:44.265 CC lib/vhost/vhost_scsi.o 
00:02:44.265 CC lib/vhost/vhost_blk.o 00:02:44.265 CC lib/iscsi/md5.o 00:02:44.265 CC lib/nvmf/rdma.o 00:02:44.524 CC lib/ftl/utils/ftl_mempool.o 00:02:44.524 CC lib/iscsi/param.o 00:02:44.524 CC lib/iscsi/portal_grp.o 00:02:44.783 CC lib/iscsi/tgt_node.o 00:02:44.783 CC lib/ftl/utils/ftl_bitmap.o 00:02:44.783 CC lib/ftl/utils/ftl_property.o 00:02:44.783 CC lib/vhost/rte_vhost_user.o 00:02:44.783 CC lib/iscsi/iscsi_subsystem.o 00:02:44.783 CC lib/iscsi/iscsi_rpc.o 00:02:45.041 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:45.041 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:45.041 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:45.301 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:45.301 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:45.301 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:45.301 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:45.301 CC lib/nvmf/auth.o 00:02:45.301 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:45.301 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:45.301 CC lib/iscsi/task.o 00:02:45.559 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:45.559 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:45.559 CC lib/ftl/base/ftl_base_dev.o 00:02:45.559 CC lib/ftl/base/ftl_base_bdev.o 00:02:45.559 CC lib/ftl/ftl_trace.o 00:02:45.836 LIB libspdk_iscsi.a 00:02:45.836 SO libspdk_iscsi.so.8.0 00:02:45.836 LIB libspdk_ftl.a 00:02:46.093 LIB libspdk_vhost.a 00:02:46.093 SYMLINK libspdk_iscsi.so 00:02:46.093 SO libspdk_vhost.so.8.0 00:02:46.093 SO libspdk_ftl.so.9.0 00:02:46.093 SYMLINK libspdk_vhost.so 00:02:46.658 LIB libspdk_nvmf.a 00:02:46.658 SYMLINK libspdk_ftl.so 00:02:46.658 SO libspdk_nvmf.so.18.0 00:02:46.916 SYMLINK libspdk_nvmf.so 00:02:47.174 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.431 CC module/sock/posix/posix.o 00:02:47.431 CC module/accel/ioat/accel_ioat.o 00:02:47.431 CC module/accel/dsa/accel_dsa.o 00:02:47.431 CC module/accel/error/accel_error.o 00:02:47.431 CC module/accel/iaa/accel_iaa.o 00:02:47.431 CC module/sock/uring/uring.o 00:02:47.431 CC module/blob/bdev/blob_bdev.o 00:02:47.431 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:47.431 CC module/keyring/file/keyring.o 00:02:47.431 LIB libspdk_env_dpdk_rpc.a 00:02:47.431 SO libspdk_env_dpdk_rpc.so.6.0 00:02:47.431 CC module/accel/error/accel_error_rpc.o 00:02:47.431 SYMLINK libspdk_env_dpdk_rpc.so 00:02:47.431 CC module/keyring/file/keyring_rpc.o 00:02:47.431 CC module/accel/ioat/accel_ioat_rpc.o 00:02:47.431 CC module/accel/iaa/accel_iaa_rpc.o 00:02:47.688 LIB libspdk_scheduler_dynamic.a 00:02:47.688 SO libspdk_scheduler_dynamic.so.4.0 00:02:47.688 LIB libspdk_accel_error.a 00:02:47.688 SYMLINK libspdk_scheduler_dynamic.so 00:02:47.688 LIB libspdk_accel_ioat.a 00:02:47.688 SO libspdk_accel_error.so.2.0 00:02:47.688 SO libspdk_accel_ioat.so.6.0 00:02:47.688 LIB libspdk_accel_iaa.a 00:02:47.688 CC module/accel/dsa/accel_dsa_rpc.o 00:02:47.688 LIB libspdk_blob_bdev.a 00:02:47.688 SO libspdk_accel_iaa.so.3.0 00:02:47.688 LIB libspdk_keyring_file.a 00:02:47.688 SYMLINK libspdk_accel_error.so 00:02:47.688 SYMLINK libspdk_accel_ioat.so 00:02:47.688 SO libspdk_blob_bdev.so.11.0 00:02:47.688 SO libspdk_keyring_file.so.1.0 00:02:47.946 SYMLINK libspdk_accel_iaa.so 00:02:47.946 SYMLINK libspdk_blob_bdev.so 00:02:47.946 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:47.946 SYMLINK libspdk_keyring_file.so 00:02:47.946 CC module/scheduler/gscheduler/gscheduler.o 00:02:47.946 LIB libspdk_accel_dsa.a 00:02:47.946 SO libspdk_accel_dsa.so.5.0 00:02:47.946 LIB libspdk_sock_uring.a 00:02:47.946 LIB libspdk_scheduler_dpdk_governor.a 00:02:47.946 SO 
libspdk_sock_uring.so.5.0 00:02:47.946 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:47.946 SYMLINK libspdk_accel_dsa.so 00:02:48.204 LIB libspdk_scheduler_gscheduler.a 00:02:48.204 LIB libspdk_sock_posix.a 00:02:48.204 SO libspdk_scheduler_gscheduler.so.4.0 00:02:48.204 SO libspdk_sock_posix.so.6.0 00:02:48.204 SYMLINK libspdk_sock_uring.so 00:02:48.204 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:48.204 CC module/bdev/gpt/gpt.o 00:02:48.204 CC module/bdev/delay/vbdev_delay.o 00:02:48.204 CC module/bdev/error/vbdev_error.o 00:02:48.204 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.204 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.204 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:48.204 CC module/blobfs/bdev/blobfs_bdev.o 00:02:48.204 SYMLINK libspdk_scheduler_gscheduler.so 00:02:48.204 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.204 SYMLINK libspdk_sock_posix.so 00:02:48.204 CC module/bdev/error/vbdev_error_rpc.o 00:02:48.204 CC module/bdev/malloc/bdev_malloc.o 00:02:48.462 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.462 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:48.462 LIB libspdk_bdev_gpt.a 00:02:48.462 LIB libspdk_bdev_error.a 00:02:48.462 LIB libspdk_blobfs_bdev.a 00:02:48.462 SO libspdk_bdev_gpt.so.6.0 00:02:48.462 SO libspdk_blobfs_bdev.so.6.0 00:02:48.462 SO libspdk_bdev_error.so.6.0 00:02:48.462 LIB libspdk_bdev_delay.a 00:02:48.462 SYMLINK libspdk_bdev_gpt.so 00:02:48.462 SO libspdk_bdev_delay.so.6.0 00:02:48.462 SYMLINK libspdk_bdev_error.so 00:02:48.720 CC module/bdev/null/bdev_null.o 00:02:48.720 SYMLINK libspdk_blobfs_bdev.so 00:02:48.720 CC module/bdev/nvme/bdev_nvme.o 00:02:48.720 SYMLINK libspdk_bdev_delay.so 00:02:48.720 CC module/bdev/null/bdev_null_rpc.o 00:02:48.720 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.720 CC module/bdev/split/vbdev_split.o 00:02:48.720 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:48.720 LIB libspdk_bdev_lvol.a 00:02:48.720 CC module/bdev/raid/bdev_raid.o 00:02:48.978 SO libspdk_bdev_lvol.so.6.0 00:02:48.978 CC module/bdev/uring/bdev_uring.o 00:02:48.978 LIB libspdk_bdev_malloc.a 00:02:48.978 CC module/bdev/split/vbdev_split_rpc.o 00:02:48.978 SO libspdk_bdev_malloc.so.6.0 00:02:48.978 SYMLINK libspdk_bdev_lvol.so 00:02:48.978 LIB libspdk_bdev_null.a 00:02:48.978 SO libspdk_bdev_null.so.6.0 00:02:48.978 SYMLINK libspdk_bdev_malloc.so 00:02:48.978 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:48.978 CC module/bdev/uring/bdev_uring_rpc.o 00:02:48.978 CC module/bdev/raid/bdev_raid_rpc.o 00:02:48.978 SYMLINK libspdk_bdev_null.so 00:02:49.239 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.239 LIB libspdk_bdev_split.a 00:02:49.239 CC module/bdev/aio/bdev_aio.o 00:02:49.239 SO libspdk_bdev_split.so.6.0 00:02:49.239 LIB libspdk_bdev_passthru.a 00:02:49.239 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.239 SYMLINK libspdk_bdev_split.so 00:02:49.239 SO libspdk_bdev_passthru.so.6.0 00:02:49.239 CC module/bdev/raid/raid0.o 00:02:49.239 CC module/bdev/raid/raid1.o 00:02:49.498 CC module/bdev/ftl/bdev_ftl.o 00:02:49.498 LIB libspdk_bdev_uring.a 00:02:49.498 LIB libspdk_bdev_zone_block.a 00:02:49.498 SYMLINK libspdk_bdev_passthru.so 00:02:49.498 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.498 SO libspdk_bdev_uring.so.6.0 00:02:49.498 SO libspdk_bdev_zone_block.so.6.0 00:02:49.498 SYMLINK libspdk_bdev_uring.so 00:02:49.498 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.498 SYMLINK libspdk_bdev_zone_block.so 00:02:49.498 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.498 CC module/bdev/raid/concat.o 00:02:49.498 CC 
module/bdev/nvme/nvme_rpc.o 00:02:49.757 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.757 LIB libspdk_bdev_ftl.a 00:02:49.757 LIB libspdk_bdev_aio.a 00:02:49.757 SO libspdk_bdev_ftl.so.6.0 00:02:49.757 SO libspdk_bdev_aio.so.6.0 00:02:49.757 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.757 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.757 SYMLINK libspdk_bdev_ftl.so 00:02:49.757 SYMLINK libspdk_bdev_aio.so 00:02:49.757 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.757 LIB libspdk_bdev_raid.a 00:02:49.757 CC module/bdev/nvme/vbdev_opal.o 00:02:49.757 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.757 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.757 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:49.757 SO libspdk_bdev_raid.so.6.0 00:02:50.015 SYMLINK libspdk_bdev_raid.so 00:02:50.015 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:50.273 LIB libspdk_bdev_iscsi.a 00:02:50.273 SO libspdk_bdev_iscsi.so.6.0 00:02:50.273 SYMLINK libspdk_bdev_iscsi.so 00:02:50.273 LIB libspdk_bdev_virtio.a 00:02:50.273 SO libspdk_bdev_virtio.so.6.0 00:02:50.531 SYMLINK libspdk_bdev_virtio.so 00:02:51.095 LIB libspdk_bdev_nvme.a 00:02:51.095 SO libspdk_bdev_nvme.so.7.0 00:02:51.352 SYMLINK libspdk_bdev_nvme.so 00:02:51.610 CC module/event/subsystems/sock/sock.o 00:02:51.610 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.610 CC module/event/subsystems/vmd/vmd.o 00:02:51.610 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.610 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.610 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.610 CC module/event/subsystems/keyring/keyring.o 00:02:51.610 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.869 LIB libspdk_event_keyring.a 00:02:51.869 LIB libspdk_event_scheduler.a 00:02:51.869 LIB libspdk_event_vhost_blk.a 00:02:51.869 SO libspdk_event_scheduler.so.4.0 00:02:51.869 LIB libspdk_event_sock.a 00:02:51.869 SO libspdk_event_keyring.so.1.0 00:02:51.869 LIB libspdk_event_iobuf.a 00:02:51.869 LIB libspdk_event_vmd.a 00:02:51.869 SO libspdk_event_vhost_blk.so.3.0 00:02:51.869 SO libspdk_event_sock.so.5.0 00:02:51.869 SO libspdk_event_iobuf.so.3.0 00:02:51.869 SO libspdk_event_vmd.so.6.0 00:02:51.869 SYMLINK libspdk_event_keyring.so 00:02:51.869 SYMLINK libspdk_event_scheduler.so 00:02:51.869 SYMLINK libspdk_event_vhost_blk.so 00:02:51.869 SYMLINK libspdk_event_sock.so 00:02:52.126 SYMLINK libspdk_event_iobuf.so 00:02:52.126 SYMLINK libspdk_event_vmd.so 00:02:52.386 CC module/event/subsystems/accel/accel.o 00:02:52.386 LIB libspdk_event_accel.a 00:02:52.386 SO libspdk_event_accel.so.6.0 00:02:52.645 SYMLINK libspdk_event_accel.so 00:02:52.903 CC module/event/subsystems/bdev/bdev.o 00:02:52.903 LIB libspdk_event_bdev.a 00:02:53.162 SO libspdk_event_bdev.so.6.0 00:02:53.162 SYMLINK libspdk_event_bdev.so 00:02:53.421 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.421 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.421 CC module/event/subsystems/scsi/scsi.o 00:02:53.421 CC module/event/subsystems/ublk/ublk.o 00:02:53.421 CC module/event/subsystems/nbd/nbd.o 00:02:53.421 LIB libspdk_event_ublk.a 00:02:53.421 LIB libspdk_event_nbd.a 00:02:53.421 LIB libspdk_event_scsi.a 00:02:53.421 SO libspdk_event_ublk.so.3.0 00:02:53.679 SO libspdk_event_scsi.so.6.0 00:02:53.679 SO libspdk_event_nbd.so.6.0 00:02:53.679 LIB libspdk_event_nvmf.a 00:02:53.679 SYMLINK libspdk_event_ublk.so 00:02:53.679 SYMLINK libspdk_event_nbd.so 00:02:53.679 SO libspdk_event_nvmf.so.6.0 00:02:53.679 SYMLINK libspdk_event_scsi.so 00:02:53.679 SYMLINK libspdk_event_nvmf.so 
00:02:53.937 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.937 CC module/event/subsystems/iscsi/iscsi.o 00:02:53.937 LIB libspdk_event_vhost_scsi.a 00:02:53.937 SO libspdk_event_vhost_scsi.so.3.0 00:02:53.937 LIB libspdk_event_iscsi.a 00:02:54.195 SO libspdk_event_iscsi.so.6.0 00:02:54.195 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.195 SYMLINK libspdk_event_iscsi.so 00:02:54.195 SO libspdk.so.6.0 00:02:54.195 SYMLINK libspdk.so 00:02:54.452 TEST_HEADER include/spdk/accel.h 00:02:54.452 CXX app/trace/trace.o 00:02:54.452 TEST_HEADER include/spdk/accel_module.h 00:02:54.452 TEST_HEADER include/spdk/assert.h 00:02:54.452 TEST_HEADER include/spdk/barrier.h 00:02:54.452 TEST_HEADER include/spdk/base64.h 00:02:54.452 TEST_HEADER include/spdk/bdev.h 00:02:54.452 TEST_HEADER include/spdk/bdev_module.h 00:02:54.452 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.452 TEST_HEADER include/spdk/bit_array.h 00:02:54.452 TEST_HEADER include/spdk/bit_pool.h 00:02:54.452 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.452 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.452 TEST_HEADER include/spdk/blobfs.h 00:02:54.710 TEST_HEADER include/spdk/blob.h 00:02:54.710 TEST_HEADER include/spdk/conf.h 00:02:54.710 TEST_HEADER include/spdk/config.h 00:02:54.710 TEST_HEADER include/spdk/cpuset.h 00:02:54.710 TEST_HEADER include/spdk/crc16.h 00:02:54.710 TEST_HEADER include/spdk/crc32.h 00:02:54.710 TEST_HEADER include/spdk/crc64.h 00:02:54.710 TEST_HEADER include/spdk/dif.h 00:02:54.710 TEST_HEADER include/spdk/dma.h 00:02:54.710 TEST_HEADER include/spdk/endian.h 00:02:54.710 TEST_HEADER include/spdk/env_dpdk.h 00:02:54.710 TEST_HEADER include/spdk/env.h 00:02:54.710 TEST_HEADER include/spdk/event.h 00:02:54.710 TEST_HEADER include/spdk/fd_group.h 00:02:54.710 TEST_HEADER include/spdk/fd.h 00:02:54.710 TEST_HEADER include/spdk/file.h 00:02:54.710 TEST_HEADER include/spdk/ftl.h 00:02:54.710 TEST_HEADER include/spdk/gpt_spec.h 00:02:54.710 TEST_HEADER include/spdk/hexlify.h 00:02:54.710 TEST_HEADER include/spdk/histogram_data.h 00:02:54.710 TEST_HEADER include/spdk/idxd.h 00:02:54.710 TEST_HEADER include/spdk/idxd_spec.h 00:02:54.710 TEST_HEADER include/spdk/init.h 00:02:54.710 TEST_HEADER include/spdk/ioat.h 00:02:54.710 TEST_HEADER include/spdk/ioat_spec.h 00:02:54.710 TEST_HEADER include/spdk/iscsi_spec.h 00:02:54.710 TEST_HEADER include/spdk/json.h 00:02:54.710 TEST_HEADER include/spdk/jsonrpc.h 00:02:54.710 CC test/event/event_perf/event_perf.o 00:02:54.710 TEST_HEADER include/spdk/keyring.h 00:02:54.710 TEST_HEADER include/spdk/keyring_module.h 00:02:54.710 TEST_HEADER include/spdk/likely.h 00:02:54.710 TEST_HEADER include/spdk/log.h 00:02:54.710 CC test/accel/dif/dif.o 00:02:54.710 TEST_HEADER include/spdk/lvol.h 00:02:54.710 TEST_HEADER include/spdk/memory.h 00:02:54.710 TEST_HEADER include/spdk/mmio.h 00:02:54.710 TEST_HEADER include/spdk/nbd.h 00:02:54.710 TEST_HEADER include/spdk/notify.h 00:02:54.710 TEST_HEADER include/spdk/nvme.h 00:02:54.710 CC test/dma/test_dma/test_dma.o 00:02:54.710 TEST_HEADER include/spdk/nvme_intel.h 00:02:54.710 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:54.710 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:54.710 CC examples/accel/perf/accel_perf.o 00:02:54.710 TEST_HEADER include/spdk/nvme_spec.h 00:02:54.710 TEST_HEADER include/spdk/nvme_zns.h 00:02:54.710 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:54.710 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:54.710 TEST_HEADER include/spdk/nvmf.h 00:02:54.710 CC test/bdev/bdevio/bdevio.o 00:02:54.710 CC 
test/app/bdev_svc/bdev_svc.o 00:02:54.710 TEST_HEADER include/spdk/nvmf_spec.h 00:02:54.710 CC test/blobfs/mkfs/mkfs.o 00:02:54.710 TEST_HEADER include/spdk/nvmf_transport.h 00:02:54.710 TEST_HEADER include/spdk/opal.h 00:02:54.710 TEST_HEADER include/spdk/opal_spec.h 00:02:54.710 TEST_HEADER include/spdk/pci_ids.h 00:02:54.710 TEST_HEADER include/spdk/pipe.h 00:02:54.710 TEST_HEADER include/spdk/queue.h 00:02:54.710 TEST_HEADER include/spdk/reduce.h 00:02:54.710 TEST_HEADER include/spdk/rpc.h 00:02:54.710 TEST_HEADER include/spdk/scheduler.h 00:02:54.710 TEST_HEADER include/spdk/scsi.h 00:02:54.710 TEST_HEADER include/spdk/scsi_spec.h 00:02:54.710 TEST_HEADER include/spdk/sock.h 00:02:54.710 TEST_HEADER include/spdk/stdinc.h 00:02:54.710 TEST_HEADER include/spdk/string.h 00:02:54.710 TEST_HEADER include/spdk/thread.h 00:02:54.710 TEST_HEADER include/spdk/trace.h 00:02:54.710 TEST_HEADER include/spdk/trace_parser.h 00:02:54.710 TEST_HEADER include/spdk/tree.h 00:02:54.710 TEST_HEADER include/spdk/ublk.h 00:02:54.710 TEST_HEADER include/spdk/util.h 00:02:54.710 CC test/env/mem_callbacks/mem_callbacks.o 00:02:54.710 TEST_HEADER include/spdk/uuid.h 00:02:54.710 TEST_HEADER include/spdk/version.h 00:02:54.710 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:54.710 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:54.710 TEST_HEADER include/spdk/vhost.h 00:02:54.710 TEST_HEADER include/spdk/vmd.h 00:02:54.710 TEST_HEADER include/spdk/xor.h 00:02:54.710 TEST_HEADER include/spdk/zipf.h 00:02:54.710 CXX test/cpp_headers/accel.o 00:02:54.969 LINK event_perf 00:02:54.969 LINK bdev_svc 00:02:54.969 LINK mkfs 00:02:54.969 CXX test/cpp_headers/accel_module.o 00:02:54.969 LINK spdk_trace 00:02:54.969 LINK test_dma 00:02:55.227 LINK dif 00:02:55.228 CXX test/cpp_headers/assert.o 00:02:55.228 CC test/event/reactor/reactor.o 00:02:55.228 LINK bdevio 00:02:55.228 LINK accel_perf 00:02:55.228 CC app/trace_record/trace_record.o 00:02:55.228 CXX test/cpp_headers/barrier.o 00:02:55.228 LINK reactor 00:02:55.228 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:55.228 CXX test/cpp_headers/base64.o 00:02:55.228 CXX test/cpp_headers/bdev.o 00:02:55.486 CXX test/cpp_headers/bdev_module.o 00:02:55.486 CC test/lvol/esnap/esnap.o 00:02:55.486 LINK mem_callbacks 00:02:55.486 CC examples/bdev/hello_world/hello_bdev.o 00:02:55.486 CC test/event/reactor_perf/reactor_perf.o 00:02:55.486 LINK spdk_trace_record 00:02:55.486 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.486 CXX test/cpp_headers/bdev_zone.o 00:02:55.744 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:55.744 CC examples/bdev/bdevperf/bdevperf.o 00:02:55.744 CC test/env/vtophys/vtophys.o 00:02:55.744 LINK reactor_perf 00:02:55.744 LINK nvme_fuzz 00:02:55.744 LINK hello_bdev 00:02:55.744 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.744 CXX test/cpp_headers/bit_array.o 00:02:55.744 LINK vtophys 00:02:55.744 CC app/nvmf_tgt/nvmf_main.o 00:02:55.744 CXX test/cpp_headers/bit_pool.o 00:02:56.002 CC test/event/app_repeat/app_repeat.o 00:02:56.002 LINK nvmf_tgt 00:02:56.002 CXX test/cpp_headers/blob_bdev.o 00:02:56.002 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:56.002 CC test/app/histogram_perf/histogram_perf.o 00:02:56.002 CC test/env/memory/memory_ut.o 00:02:56.002 LINK app_repeat 00:02:56.260 LINK histogram_perf 00:02:56.260 LINK vhost_fuzz 00:02:56.260 LINK env_dpdk_post_init 00:02:56.260 CXX test/cpp_headers/blobfs_bdev.o 00:02:56.518 LINK bdevperf 00:02:56.518 CC app/iscsi_tgt/iscsi_tgt.o 00:02:56.518 CXX test/cpp_headers/blobfs.o 00:02:56.518 CC 
test/event/scheduler/scheduler.o 00:02:56.518 CC test/rpc_client/rpc_client_test.o 00:02:56.518 CC app/spdk_tgt/spdk_tgt.o 00:02:56.518 CC test/nvme/aer/aer.o 00:02:56.776 CXX test/cpp_headers/blob.o 00:02:56.776 LINK iscsi_tgt 00:02:56.776 LINK rpc_client_test 00:02:56.776 LINK spdk_tgt 00:02:56.776 LINK scheduler 00:02:56.776 CC examples/blob/hello_world/hello_blob.o 00:02:56.776 CXX test/cpp_headers/conf.o 00:02:56.776 LINK aer 00:02:57.034 CXX test/cpp_headers/config.o 00:02:57.034 CC examples/ioat/perf/perf.o 00:02:57.034 CXX test/cpp_headers/cpuset.o 00:02:57.034 CC app/spdk_lspci/spdk_lspci.o 00:02:57.034 LINK hello_blob 00:02:57.034 CC examples/nvme/hello_world/hello_world.o 00:02:57.034 CC test/nvme/reset/reset.o 00:02:57.034 CC examples/sock/hello_world/hello_sock.o 00:02:57.291 LINK iscsi_fuzz 00:02:57.291 LINK memory_ut 00:02:57.291 LINK spdk_lspci 00:02:57.291 CXX test/cpp_headers/crc16.o 00:02:57.291 LINK ioat_perf 00:02:57.291 LINK hello_world 00:02:57.291 LINK hello_sock 00:02:57.549 LINK reset 00:02:57.549 CXX test/cpp_headers/crc32.o 00:02:57.549 CC examples/blob/cli/blobcli.o 00:02:57.549 CC test/env/pci/pci_ut.o 00:02:57.549 CC app/spdk_nvme_perf/perf.o 00:02:57.549 CC test/app/jsoncat/jsoncat.o 00:02:57.549 CC examples/ioat/verify/verify.o 00:02:57.549 CC examples/nvme/reconnect/reconnect.o 00:02:57.549 CXX test/cpp_headers/crc64.o 00:02:57.808 CC test/app/stub/stub.o 00:02:57.808 CC test/nvme/sgl/sgl.o 00:02:57.808 LINK jsoncat 00:02:57.808 LINK verify 00:02:57.808 CXX test/cpp_headers/dif.o 00:02:57.808 LINK stub 00:02:57.808 LINK pci_ut 00:02:58.066 CC test/nvme/e2edp/nvme_dp.o 00:02:58.066 LINK blobcli 00:02:58.066 LINK reconnect 00:02:58.066 LINK sgl 00:02:58.066 CXX test/cpp_headers/dma.o 00:02:58.066 CC examples/vmd/lsvmd/lsvmd.o 00:02:58.066 CC examples/vmd/led/led.o 00:02:58.325 CXX test/cpp_headers/endian.o 00:02:58.325 LINK nvme_dp 00:02:58.325 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:58.325 LINK lsvmd 00:02:58.325 LINK led 00:02:58.325 CC examples/util/zipf/zipf.o 00:02:58.325 CXX test/cpp_headers/env_dpdk.o 00:02:58.325 CC examples/nvmf/nvmf/nvmf.o 00:02:58.583 CC examples/thread/thread/thread_ex.o 00:02:58.583 LINK spdk_nvme_perf 00:02:58.583 CC test/nvme/overhead/overhead.o 00:02:58.583 LINK zipf 00:02:58.583 CXX test/cpp_headers/env.o 00:02:58.583 CC test/nvme/err_injection/err_injection.o 00:02:58.583 CC test/nvme/startup/startup.o 00:02:58.912 LINK thread 00:02:58.912 CC app/spdk_nvme_identify/identify.o 00:02:58.912 LINK nvmf 00:02:58.912 LINK nvme_manage 00:02:58.912 CXX test/cpp_headers/event.o 00:02:58.912 LINK err_injection 00:02:58.912 LINK startup 00:02:58.912 LINK overhead 00:02:58.912 CC test/nvme/reserve/reserve.o 00:02:58.912 CXX test/cpp_headers/fd_group.o 00:02:58.912 CC examples/nvme/arbitration/arbitration.o 00:02:59.203 CC test/nvme/simple_copy/simple_copy.o 00:02:59.203 CC examples/nvme/hotplug/hotplug.o 00:02:59.203 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:59.203 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:59.203 LINK reserve 00:02:59.203 CXX test/cpp_headers/fd.o 00:02:59.203 CC examples/idxd/perf/perf.o 00:02:59.203 LINK cmb_copy 00:02:59.203 LINK simple_copy 00:02:59.203 LINK hotplug 00:02:59.203 CXX test/cpp_headers/file.o 00:02:59.203 LINK interrupt_tgt 00:02:59.461 LINK arbitration 00:02:59.461 CXX test/cpp_headers/ftl.o 00:02:59.461 CXX test/cpp_headers/gpt_spec.o 00:02:59.461 LINK idxd_perf 00:02:59.461 LINK spdk_nvme_identify 00:02:59.461 CC test/nvme/connect_stress/connect_stress.o 00:02:59.461 CC 
test/nvme/boot_partition/boot_partition.o 00:02:59.461 CC app/spdk_nvme_discover/discovery_aer.o 00:02:59.719 CXX test/cpp_headers/hexlify.o 00:02:59.719 CC examples/nvme/abort/abort.o 00:02:59.719 CXX test/cpp_headers/histogram_data.o 00:02:59.719 CC test/thread/poller_perf/poller_perf.o 00:02:59.719 CC app/spdk_top/spdk_top.o 00:02:59.719 LINK connect_stress 00:02:59.719 LINK boot_partition 00:02:59.719 CC app/vhost/vhost.o 00:02:59.719 LINK spdk_nvme_discover 00:02:59.977 CXX test/cpp_headers/idxd.o 00:02:59.977 CC test/nvme/compliance/nvme_compliance.o 00:02:59.977 LINK poller_perf 00:02:59.977 CXX test/cpp_headers/idxd_spec.o 00:02:59.977 CXX test/cpp_headers/init.o 00:02:59.977 LINK abort 00:02:59.977 LINK vhost 00:02:59.977 CXX test/cpp_headers/ioat.o 00:03:00.244 CXX test/cpp_headers/ioat_spec.o 00:03:00.244 CC test/nvme/fused_ordering/fused_ordering.o 00:03:00.244 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:00.244 LINK nvme_compliance 00:03:00.244 CC app/spdk_dd/spdk_dd.o 00:03:00.244 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:00.244 CXX test/cpp_headers/iscsi_spec.o 00:03:00.244 LINK pmr_persistence 00:03:00.244 LINK fused_ordering 00:03:00.244 CXX test/cpp_headers/json.o 00:03:00.505 CC test/nvme/fdp/fdp.o 00:03:00.505 CC test/nvme/cuse/cuse.o 00:03:00.505 LINK doorbell_aers 00:03:00.505 CXX test/cpp_headers/jsonrpc.o 00:03:00.505 CXX test/cpp_headers/keyring.o 00:03:00.505 CXX test/cpp_headers/keyring_module.o 00:03:00.505 LINK spdk_top 00:03:00.505 CXX test/cpp_headers/likely.o 00:03:00.764 CC app/fio/nvme/fio_plugin.o 00:03:00.764 LINK esnap 00:03:00.764 LINK spdk_dd 00:03:00.764 CXX test/cpp_headers/log.o 00:03:00.764 CXX test/cpp_headers/lvol.o 00:03:00.764 LINK fdp 00:03:00.764 CXX test/cpp_headers/memory.o 00:03:00.764 CXX test/cpp_headers/mmio.o 00:03:00.764 CC app/fio/bdev/fio_plugin.o 00:03:00.764 CXX test/cpp_headers/nbd.o 00:03:00.764 CXX test/cpp_headers/notify.o 00:03:00.764 CXX test/cpp_headers/nvme.o 00:03:00.764 CXX test/cpp_headers/nvme_intel.o 00:03:00.764 CXX test/cpp_headers/nvme_ocssd.o 00:03:01.023 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:01.023 CXX test/cpp_headers/nvme_spec.o 00:03:01.023 CXX test/cpp_headers/nvme_zns.o 00:03:01.023 CXX test/cpp_headers/nvmf_cmd.o 00:03:01.023 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:01.023 CXX test/cpp_headers/nvmf.o 00:03:01.023 CXX test/cpp_headers/nvmf_spec.o 00:03:01.023 CXX test/cpp_headers/nvmf_transport.o 00:03:01.282 CXX test/cpp_headers/opal.o 00:03:01.282 LINK spdk_nvme 00:03:01.282 CXX test/cpp_headers/opal_spec.o 00:03:01.282 CXX test/cpp_headers/pci_ids.o 00:03:01.282 CXX test/cpp_headers/pipe.o 00:03:01.282 CXX test/cpp_headers/queue.o 00:03:01.282 CXX test/cpp_headers/reduce.o 00:03:01.282 CXX test/cpp_headers/rpc.o 00:03:01.282 LINK spdk_bdev 00:03:01.282 CXX test/cpp_headers/scheduler.o 00:03:01.282 CXX test/cpp_headers/scsi.o 00:03:01.282 CXX test/cpp_headers/scsi_spec.o 00:03:01.282 CXX test/cpp_headers/sock.o 00:03:01.540 CXX test/cpp_headers/stdinc.o 00:03:01.540 CXX test/cpp_headers/string.o 00:03:01.540 CXX test/cpp_headers/thread.o 00:03:01.540 CXX test/cpp_headers/trace.o 00:03:01.540 CXX test/cpp_headers/trace_parser.o 00:03:01.540 CXX test/cpp_headers/tree.o 00:03:01.540 CXX test/cpp_headers/ublk.o 00:03:01.540 CXX test/cpp_headers/util.o 00:03:01.540 CXX test/cpp_headers/uuid.o 00:03:01.540 CXX test/cpp_headers/version.o 00:03:01.540 CXX test/cpp_headers/vfio_user_pci.o 00:03:01.540 CXX test/cpp_headers/vfio_user_spec.o 00:03:01.540 CXX test/cpp_headers/vhost.o 
00:03:01.540 CXX test/cpp_headers/vmd.o 00:03:01.540 CXX test/cpp_headers/xor.o 00:03:01.799 CXX test/cpp_headers/zipf.o 00:03:01.799 LINK cuse 00:03:01.799 00:03:01.799 real 1m3.135s 00:03:01.799 user 6m35.491s 00:03:01.799 sys 1m35.463s 00:03:01.799 18:24:15 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:01.799 18:24:15 make -- common/autotest_common.sh@10 -- $ set +x 00:03:01.799 ************************************ 00:03:01.799 END TEST make 00:03:01.799 ************************************ 00:03:02.058 18:24:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:02.058 18:24:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:02.058 18:24:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:02.059 18:24:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.059 18:24:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:02.059 18:24:15 -- pm/common@44 -- $ pid=5145 00:03:02.059 18:24:15 -- pm/common@50 -- $ kill -TERM 5145 00:03:02.059 18:24:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.059 18:24:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:02.059 18:24:15 -- pm/common@44 -- $ pid=5146 00:03:02.059 18:24:15 -- pm/common@50 -- $ kill -TERM 5146 00:03:02.059 18:24:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:02.059 18:24:15 -- nvmf/common.sh@7 -- # uname -s 00:03:02.059 18:24:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:02.059 18:24:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:02.059 18:24:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:02.059 18:24:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:02.059 18:24:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:02.059 18:24:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:02.059 18:24:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:02.059 18:24:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:02.059 18:24:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:02.059 18:24:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:02.059 18:24:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:03:02.059 18:24:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:03:02.059 18:24:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:02.059 18:24:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:02.059 18:24:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:02.059 18:24:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:02.059 18:24:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:02.059 18:24:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:02.059 18:24:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:02.059 18:24:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:02.059 18:24:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.059 18:24:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.059 18:24:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.059 18:24:15 -- paths/export.sh@5 -- # export PATH 00:03:02.059 18:24:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:02.059 18:24:15 -- nvmf/common.sh@47 -- # : 0 00:03:02.059 18:24:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:02.059 18:24:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:02.059 18:24:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:02.059 18:24:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:02.059 18:24:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:02.059 18:24:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:02.059 18:24:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:02.059 18:24:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:02.059 18:24:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:02.059 18:24:15 -- spdk/autotest.sh@32 -- # uname -s 00:03:02.059 18:24:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:02.059 18:24:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:02.059 18:24:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:02.059 18:24:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:02.059 18:24:15 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:02.059 18:24:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:02.059 18:24:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:02.059 18:24:15 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:02.059 18:24:15 -- spdk/autotest.sh@48 -- # udevadm_pid=52676 00:03:02.059 18:24:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:02.059 18:24:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:02.059 18:24:15 -- pm/common@17 -- # local monitor 00:03:02.059 18:24:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.059 18:24:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:02.059 18:24:15 -- pm/common@25 -- # sleep 1 00:03:02.059 18:24:15 -- pm/common@21 -- # date +%s 00:03:02.059 18:24:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715883855 00:03:02.059 18:24:15 -- pm/common@21 -- # date +%s 00:03:02.059 18:24:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715883855 00:03:02.059 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715883855_collect-cpu-load.pm.log 00:03:02.059 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715883855_collect-vmstat.pm.log 00:03:03.437 18:24:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:03.437 18:24:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:03.437 18:24:16 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:03.437 18:24:16 -- common/autotest_common.sh@10 -- # set +x 00:03:03.437 18:24:16 -- spdk/autotest.sh@59 -- # create_test_list 00:03:03.437 18:24:16 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:03.437 18:24:16 -- common/autotest_common.sh@10 -- # set +x 00:03:03.437 18:24:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:03.437 18:24:16 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:03.437 18:24:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:03.437 18:24:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:03.437 18:24:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:03.437 18:24:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:03.437 18:24:16 -- common/autotest_common.sh@1451 -- # uname 00:03:03.437 18:24:16 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:03:03.437 18:24:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:03.437 18:24:16 -- common/autotest_common.sh@1471 -- # uname 00:03:03.437 18:24:16 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:03.437 18:24:16 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:03.437 18:24:16 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:03.437 18:24:16 -- spdk/autotest.sh@72 -- # hash lcov 00:03:03.437 18:24:16 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:03.437 18:24:16 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:03.437 --rc lcov_branch_coverage=1 00:03:03.437 --rc lcov_function_coverage=1 00:03:03.437 --rc genhtml_branch_coverage=1 00:03:03.437 --rc genhtml_function_coverage=1 00:03:03.437 --rc genhtml_legend=1 00:03:03.437 --rc geninfo_all_blocks=1 00:03:03.437 ' 00:03:03.437 18:24:16 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:03.437 --rc lcov_branch_coverage=1 00:03:03.437 --rc lcov_function_coverage=1 00:03:03.437 --rc genhtml_branch_coverage=1 00:03:03.437 --rc genhtml_function_coverage=1 00:03:03.437 --rc genhtml_legend=1 00:03:03.437 --rc geninfo_all_blocks=1 00:03:03.437 ' 00:03:03.437 18:24:16 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:03.437 --rc lcov_branch_coverage=1 00:03:03.437 --rc lcov_function_coverage=1 00:03:03.437 --rc genhtml_branch_coverage=1 00:03:03.437 --rc genhtml_function_coverage=1 00:03:03.437 --rc genhtml_legend=1 00:03:03.437 --rc geninfo_all_blocks=1 00:03:03.437 --no-external' 00:03:03.437 18:24:16 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:03.437 --rc lcov_branch_coverage=1 00:03:03.437 --rc lcov_function_coverage=1 00:03:03.437 --rc genhtml_branch_coverage=1 00:03:03.437 --rc genhtml_function_coverage=1 00:03:03.437 --rc genhtml_legend=1 00:03:03.437 --rc geninfo_all_blocks=1 00:03:03.437 --no-external' 00:03:03.437 18:24:16 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:03.437 lcov: LCOV version 
1.14 00:03:03.437 18:24:16 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:18.316 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:18.316 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:33.199 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:33.199 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:33.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 
00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:33.200 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:33.200 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:33.200 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:35.102 18:24:48 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:35.102 18:24:48 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:35.102 18:24:48 -- common/autotest_common.sh@10 -- # set +x 00:03:35.102 18:24:48 -- spdk/autotest.sh@91 -- # rm -f 00:03:35.102 18:24:48 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:36.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:36.037 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:36.037 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:36.038 18:24:49 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:36.038 18:24:49 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:36.038 18:24:49 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:36.038 18:24:49 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:36.038 18:24:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:36.038 18:24:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:36.038 18:24:49 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:36.038 18:24:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.038 18:24:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:36.038 18:24:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:36.038 18:24:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:03:36.038 18:24:49 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:03:36.038 18:24:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:36.038 18:24:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:36.038 18:24:49 -- 
common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:36.038 18:24:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:03:36.038 18:24:49 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:03:36.038 18:24:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:36.038 18:24:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:36.038 18:24:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:36.038 18:24:49 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:03:36.038 18:24:49 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:03:36.038 18:24:49 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:36.038 18:24:49 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:36.038 18:24:49 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:36.038 18:24:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.038 18:24:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.038 18:24:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:36.038 18:24:49 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:36.038 18:24:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:36.038 No valid GPT data, bailing 00:03:36.038 18:24:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.038 18:24:49 -- scripts/common.sh@391 -- # pt= 00:03:36.038 18:24:49 -- scripts/common.sh@392 -- # return 1 00:03:36.038 18:24:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:36.038 1+0 records in 00:03:36.038 1+0 records out 00:03:36.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0055586 s, 189 MB/s 00:03:36.038 18:24:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.038 18:24:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.038 18:24:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:36.038 18:24:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:36.038 18:24:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:36.038 No valid GPT data, bailing 00:03:36.038 18:24:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:36.038 18:24:49 -- scripts/common.sh@391 -- # pt= 00:03:36.038 18:24:49 -- scripts/common.sh@392 -- # return 1 00:03:36.038 18:24:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:36.038 1+0 records in 00:03:36.038 1+0 records out 00:03:36.038 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00335059 s, 313 MB/s 00:03:36.038 18:24:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.038 18:24:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.038 18:24:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:36.038 18:24:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:36.038 18:24:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:36.038 No valid GPT data, bailing 00:03:36.038 18:24:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:36.297 18:24:49 -- scripts/common.sh@391 -- # pt= 00:03:36.297 18:24:49 -- scripts/common.sh@392 -- # return 1 00:03:36.297 18:24:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:36.297 1+0 records in 00:03:36.297 1+0 records out 00:03:36.297 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.00355584 s, 295 MB/s 00:03:36.297 18:24:49 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.297 18:24:49 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:36.297 18:24:49 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:36.297 18:24:49 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:36.297 18:24:49 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:36.297 No valid GPT data, bailing 00:03:36.297 18:24:49 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:36.297 18:24:49 -- scripts/common.sh@391 -- # pt= 00:03:36.297 18:24:49 -- scripts/common.sh@392 -- # return 1 00:03:36.297 18:24:49 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:36.297 1+0 records in 00:03:36.297 1+0 records out 00:03:36.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462784 s, 227 MB/s 00:03:36.297 18:24:49 -- spdk/autotest.sh@118 -- # sync 00:03:36.297 18:24:49 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:36.297 18:24:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:36.297 18:24:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:38.201 18:24:51 -- spdk/autotest.sh@124 -- # uname -s 00:03:38.201 18:24:51 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:38.201 18:24:51 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:38.201 18:24:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:38.201 18:24:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.201 18:24:51 -- common/autotest_common.sh@10 -- # set +x 00:03:38.201 ************************************ 00:03:38.201 START TEST setup.sh 00:03:38.201 ************************************ 00:03:38.201 18:24:51 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:38.201 * Looking for test storage... 00:03:38.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:38.201 18:24:51 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:38.201 18:24:51 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:38.201 18:24:51 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:38.201 18:24:51 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:38.201 18:24:51 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:38.201 18:24:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.201 ************************************ 00:03:38.201 START TEST acl 00:03:38.201 ************************************ 00:03:38.201 18:24:51 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:38.460 * Looking for test storage... 
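The pre-cleanup pass a few entries up reduces to a simple pattern: skip zoned namespaces, then zero out the first MiB of every NVMe namespace that shows no partition table (the "No valid GPT data, bailing" / dd pairs in the trace). Below is a rough standalone sketch of that pattern, not the actual autotest code: the blkid and dd invocations are taken from the trace, the rest (globbing, sysfs check, control flow) is assumed, and it needs root to touch raw block devices.

    #!/usr/bin/env bash
    # Rough sketch of the pre-cleanup wipe traced above: skip zoned
    # namespaces, then zero the first MiB of any NVMe namespace that
    # carries no partition table. Not the actual autotest code.
    for dev in /dev/nvme*n*; do
        [[ $dev == *p* ]] && continue                 # whole namespaces only
        name=${dev#/dev/}
        # Zoned namespaces (queue/zoned != none) are left alone.
        if [[ -e /sys/block/$name/queue/zoned ]] &&
           [[ $(cat "/sys/block/$name/queue/zoned") != none ]]; then
            continue
        fi
        # autotest additionally probes with scripts/spdk-gpt.py here
        # (hence "No valid GPT data, bailing"); this sketch relies on blkid.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
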
00:03:38.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:38.460 18:24:51 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:38.460 18:24:51 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:38.460 18:24:51 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:38.460 18:24:51 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:38.460 18:24:51 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:38.460 18:24:51 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:38.460 18:24:51 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:38.460 18:24:51 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.460 18:24:51 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:39.029 18:24:52 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:39.029 18:24:52 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:39.029 18:24:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.029 18:24:52 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:39.029 18:24:52 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.029 18:24:52 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:39.968 18:24:53 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.968 Hugepages 00:03:39.968 node hugesize free / total 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.968 00:03:39.968 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:39.968 18:24:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:39.968 18:24:53 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:39.968 18:24:53 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.968 18:24:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.968 ************************************ 00:03:39.968 START TEST denied 00:03:39.968 ************************************ 00:03:39.968 18:24:53 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:39.968 18:24:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:39.968 18:24:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:39.968 18:24:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.968 18:24:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:39.968 18:24:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:40.906 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.906 18:24:54 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:41.474 00:03:41.474 real 0m1.410s 00:03:41.474 user 0m0.556s 00:03:41.474 sys 0m0.784s 00:03:41.474 18:24:54 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:41.474 ************************************ 00:03:41.474 END TEST denied 00:03:41.474 ************************************ 00:03:41.474 18:24:54 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:41.474 18:24:54 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:41.474 18:24:54 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:41.474 18:24:54 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:41.474 18:24:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:41.474 ************************************ 00:03:41.474 START TEST allowed 00:03:41.474 ************************************ 00:03:41.474 18:24:54 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:41.474 18:24:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:41.474 18:24:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:41.474 18:24:54 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:41.474 18:24:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.474 18:24:54 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:42.409 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.409 18:24:55 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:42.977 00:03:42.977 real 0m1.520s 00:03:42.977 user 0m0.651s 00:03:42.977 sys 0m0.862s 00:03:42.977 18:24:56 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:42.977 ************************************ 00:03:42.977 END TEST 
allowed 00:03:42.977 ************************************ 00:03:42.977 18:24:56 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:42.977 00:03:42.977 real 0m4.728s 00:03:42.977 user 0m2.015s 00:03:42.977 sys 0m2.645s 00:03:42.977 18:24:56 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:42.977 18:24:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.977 ************************************ 00:03:42.977 END TEST acl 00:03:42.977 ************************************ 00:03:42.977 18:24:56 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:42.977 18:24:56 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:42.977 18:24:56 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:42.977 18:24:56 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.977 ************************************ 00:03:42.977 START TEST hugepages 00:03:42.977 ************************************ 00:03:42.977 18:24:56 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:43.237 * Looking for test storage... 00:03:43.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:43.237 18:24:56 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 5614052 kB' 'MemAvailable: 7412596 kB' 'Buffers: 2436 kB' 'Cached: 2011248 kB' 'SwapCached: 0 kB' 'Active: 832020 kB' 'Inactive: 1285208 kB' 'Active(anon): 114032 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285208 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 105448 kB' 'Mapped: 48712 kB' 'Shmem: 10488 kB' 'KReclaimable: 65024 kB' 'Slab: 137872 kB' 'SReclaimable: 65024 kB' 'SUnreclaim: 72848 kB' 'KernelStack: 6476 kB' 'PageTables: 4108 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 333092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.238 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 
18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 
18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
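The long run of "[[ <field> == Hugepagesize ]] ... continue" entries above is setup/common.sh's get_meminfo helper walking /proc/meminfo key by key until it reaches Hugepagesize, at which point it echoes the value (2048) that setup/hugepages.sh records as default_hugepages and uses to build the nr_hugepages paths before clear_hp zeroes the per-node counts. A simplified standalone sketch of that flow (function and variable names here are illustrative, not the harness's exact code):

  # Look up one /proc/meminfo field, mirroring the traced loop: split each line
  # on ': ', skip keys that do not match, print the value of the first match.
  meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  default_hugepages=$(meminfo_value Hugepagesize)   # 2048 (kB) in this run
  default_huge_nr=/sys/kernel/mm/hugepages/hugepages-${default_hugepages}kB/nr_hugepages
  global_huge_nr=/proc/sys/vm/nr_hugepages

  # clear_hp-style reset of the per-node counts (xtrace does not print the
  # redirection, so the target file is inferred; writing here requires root).
  for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"
  done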
00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:43.239 18:24:56 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:43.239 18:24:56 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:43.239 18:24:56 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:43.239 18:24:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.239 ************************************ 00:03:43.239 START TEST default_setup 00:03:43.239 ************************************ 00:03:43.239 18:24:56 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:43.239 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:43.239 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:43.239 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:43.239 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:43.239 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:43.239 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.240 18:24:56 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:43.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.070 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7617744 kB' 'MemAvailable: 9416160 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848844 kB' 'Inactive: 1285220 kB' 'Active(anon): 130856 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285220 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121988 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137460 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6480 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
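The printf entry above is where get_meminfo captures a complete /proc/meminfo snapshot (mapfile -t mem) for this node-less query; the checks that follow pull single fields out of it, AnonHugePages here and HugePages_Surp and HugePages_Rsvd further down, after hugepages.sh@96 confirmed that transparent hugepages are not set to "never" ("always [madvise] never"). Outside the harness the same values can be read directly; the commands below are plain standalone equivalents, not the harness's code:

  # Transparent hugepage policy (the bracketed word is the active setting):
  cat /sys/kernel/mm/transparent_hugepage/enabled
  # Individual fields consulted by the verification pass (the HugePages_* counts
  # have no unit; the other fields are reported in kB):
  awk '/^AnonHugePages:/   {print $2}' /proc/meminfo
  awk '/^HugePages_Total:/ {print $2}' /proc/meminfo
  awk '/^HugePages_Free:/  {print $2}' /proc/meminfo
  awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo
  awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo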
00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 
18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.070 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
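default_setup requested 2097152 kB split into pages of the 2048 kB default size, which is how get_test_nr_hugepages arrived at nr_hugepages=1024 (2097152 / 2048 = 1024), and the snapshot above already reports HugePages_Total: 1024, HugePages_Free: 1024 and Hugetlb: 2097152 kB. A standalone check in the same spirit as the verify_nr_hugepages pass this trace is working through (names and messages are illustrative):

  # Assert that the kernel reports the hugepage count the test configured.
  expected=1024
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
  if [[ $total -eq $expected && $free -eq $expected ]]; then
      echo "hugepages OK: total=$total free=$free"
  else
      echo "hugepages mismatch: total=$total free=$free expected=$expected" >&2
      exit 1
  fi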
00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241968 kB' 'MemFree: 7617744 kB' 'MemAvailable: 9416164 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848496 kB' 'Inactive: 1285224 kB' 'Active(anon): 130508 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121692 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137456 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72712 kB' 'KernelStack: 6480 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.071 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.072 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7617744 kB' 'MemAvailable: 9416164 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848260 kB' 'Inactive: 1285224 kB' 'Active(anon): 130272 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121436 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137460 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6480 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349980 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:44.073 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 
18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.074 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 
18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.075 
18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:44.075 nr_hugepages=1024 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.075 resv_hugepages=0 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.075 surplus_hugepages=0 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.075 anon_hugepages=0 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7618264 kB' 'MemAvailable: 9416684 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848520 kB' 'Inactive: 1285224 kB' 'Active(anon): 130532 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 121696 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137460 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6480 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.075 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 
18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.076 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.077 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7618364 kB' 'MemUsed: 4623604 kB' 'SwapCached: 0 kB' 'Active: 848548 kB' 'Inactive: 1285224 kB' 'Active(anon): 130560 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 2013676 kB' 'Mapped: 48720 kB' 'AnonPages: 121696 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64744 kB' 'Slab: 137460 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.337 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 
18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- 
# sorted_t[nodes_test[node]]=1
00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:44.338 node0=1024 expecting 1024 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:44.338
00:03:44.338 real 0m1.013s
00:03:44.338 user 0m0.495s
00:03:44.338 sys 0m0.457s
00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:44.338 18:24:57 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:44.338 ************************************
00:03:44.338 END TEST default_setup
00:03:44.338 ************************************
00:03:44.338 18:24:57 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:44.338 18:24:57 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:44.338 18:24:57 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:44.338 18:24:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:44.338 ************************************
00:03:44.338 START TEST per_node_1G_alloc
00:03:44.338 ************************************
00:03:44.338 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:03:44.338 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:44.338 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- #
nodes_test[_no_nodes]=512 00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.339 18:24:57 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:44.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.601 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.601 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8684128 kB' 'MemAvailable: 10482548 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 849028 kB' 'Inactive: 1285224 kB' 'Active(anon): 131040 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 
'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122168 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137452 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 6468 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.601 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
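[editor's note] The xtrace above and below is the per-field scan behind get_meminfo in setup/common.sh: the meminfo file is read with mapfile, any "Node N " prefix is stripped, each line is split on ': ', and every field other than the requested key (here AnonHugePages) is skipped with continue until the matching value is echoed. A minimal stand-alone sketch of that lookup pattern follows; the function name get_meminfo_value is hypothetical and this is not the SPDK helper itself, just the same parsing idea.
#!/usr/bin/env bash
# Sketch: look up one field from /proc/meminfo or a per-node meminfo file.
shopt -s extglob
get_meminfo_value() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local mem line var val _
	# Per-node files carry a "Node <id> " prefix on every line.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Strip the "Node <id> " prefix so both formats parse identically.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}
get_meminfo_value AnonHugePages
get_meminfo_value HugePages_Total
[editor's note] Against the snapshot printed just above, this kind of lookup would return 0 for AnonHugePages and 512 for HugePages_Total.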
00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.602 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8684128 kB' 'MemAvailable: 10482548 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848296 kB' 'Inactive: 1285224 kB' 'Active(anon): 130308 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121456 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137452 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72708 kB' 'KernelStack: 6480 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.603 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.604 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
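[editor's note] For reference while reading the tail of the HugePages_Surp scan and the HugePages_Rsvd lookup that follow: the expected figure comes from the sizing done at the start of per_node_1G_alloc, where get_test_nr_hugepages 1048576 0 asks for 1 GiB on node 0 and, with the 2048 kB Hugepagesize reported in the snapshots, that works out to the 512 pages behind NRHUGE=512 HUGENODE=0 (hence HugePages_Total: 512 and Hugetlb: 1048576 kB above). A small sketch of that arithmetic, with illustrative variable names rather than the script's own:
#!/usr/bin/env bash
# Sketch: derive the per-node huge page count from a size request in kB.
requested_kb=1048576   # 1 GiB in kB, as in get_test_nr_hugepages 1048576 0
node=0
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$(( requested_kb / hugepage_kb ))                   # 1048576 / 2048 = 512
echo "would request $nr_hugepages pages of ${hugepage_kb} kB on node $node"
echo "per-node knob: /sys/devices/system/node/node$node/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages"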
00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8684128 kB' 'MemAvailable: 10482548 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848508 kB' 'Inactive: 1285224 kB' 'Active(anon): 130520 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121668 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137444 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72700 kB' 'KernelStack: 6464 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.605 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.606 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
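The run of continue entries above and below is the field-by-field scan inside setup/common.sh's get_meminfo: /proc/meminfo is read with IFS=': ' into var/val pairs, and every key that is not the one requested (HugePages_Rsvd at this point) simply hits continue until the match is found and its value is echoed. A minimal stand-alone sketch of that lookup pattern, assuming only a readable /proc/meminfo; the function name get_meminfo_value is invented for illustration, and the real helper additionally handles the per-node meminfo files:

  # Sketch: fetch one field from /proc/meminfo the way the traced loop does.
  get_meminfo_value() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip every non-matching key, as in the trace
          echo "$val"                        # kB for sizes, a page count for HugePages_* keys
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_value HugePages_Rsvd   # expected to print 0 in this run
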
00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.607 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 
18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:44.869 nr_hugepages=512 00:03:44.869 resv_hugepages=0 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.869 surplus_hugepages=0 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.869 anon_hugepages=0 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8684128 kB' 'MemAvailable: 10482548 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848252 kB' 'Inactive: 1285224 kB' 'Active(anon): 130264 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121396 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137448 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72704 kB' 'KernelStack: 6448 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.869 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
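Once this second scan reaches HugePages_Total (echo 512 / return 0 further down), setup/hugepages.sh@110 checks that the kernel reports exactly nr_hugepages + surp + resv pages, then repeats the bookkeeping per NUMA node against /sys/devices/system/node/node0/meminfo before printing 'node0=512 expecting 512'. A hedged sketch of that verification follows; NR_EXPECTED stands in for the script's nr_hugepages=512, and the loop mirrors the arithmetic visible in the trace rather than the exact hugepages.sh code:

  # Global check: total hugepages must equal requested + surplus + reserved.
  NR_EXPECTED=512
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
  resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
  surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
  (( total == NR_EXPECTED + surp + resv )) || echo "unexpected hugepage total: $total"

  # Per-node meminfo repeats the same keys behind a 'Node N ' prefix, which the traced
  # helper strips with "${mem[@]#Node +([0-9]) }"; here awk just keys off the third column.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      node_total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
      echo "node${node}=${node_total} expecting ${NR_EXPECTED}"
  done

On this single-node VM the loop runs once and prints node0=512 expecting 512, matching the line that appears later in this log.
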
00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.870 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@33 -- # echo 512 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8684128 kB' 'MemUsed: 3557840 kB' 'SwapCached: 0 kB' 'Active: 848560 kB' 'Inactive: 1285224 kB' 'Active(anon): 130572 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 2013676 kB' 'Mapped: 48720 kB' 'AnonPages: 121704 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64744 kB' 'Slab: 137448 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.871 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.872 node0=512 expecting 512 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:44.872 00:03:44.872 real 0m0.517s 00:03:44.872 user 0m0.256s 00:03:44.872 sys 0m0.293s 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:44.872 18:24:58 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:44.872 ************************************ 00:03:44.872 END TEST per_node_1G_alloc 00:03:44.872 ************************************ 00:03:44.872 18:24:58 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:44.872 18:24:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:44.872 18:24:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:44.872 18:24:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:44.872 ************************************ 00:03:44.872 START TEST even_2G_alloc 00:03:44.872 ************************************ 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:44.872 18:24:58 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.872 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.873 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.134 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.134 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7639384 kB' 'MemAvailable: 9437804 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 849268 kB' 'Inactive: 1285224 kB' 'Active(anon): 131280 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122192 kB' 'Mapped: 48760 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137476 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72732 kB' 'KernelStack: 6548 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 
18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.134 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 
18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7643224 kB' 'MemAvailable: 9441644 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848912 kB' 'Inactive: 1285224 kB' 'Active(anon): 130924 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122088 kB' 'Mapped: 48820 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137460 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6516 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13461012 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.135 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 
18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 
18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.136 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.137 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7642972 kB' 'MemAvailable: 9441392 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848372 kB' 'Inactive: 1285224 kB' 'Active(anon): 130384 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121492 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137460 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6448 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.400 18:24:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.400 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.401 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 
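A side note on the escaped names that dominate these trace lines: the comparison is written with every character backslash-escaped so that the right-hand side of [[ == ]] is matched literally rather than as a glob pattern, and xtrace prints those escapes back verbatim. A standalone two-line demo (not part of the SPDK scripts):
var=HugePages_Rsvd
[[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] && echo "literal match"   # what the trace shows
[[ $var == HugePages_* ]] && echo "glob match"                        # an unescaped rhs is a pattern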
18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 
18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:45.402 nr_hugepages=1024 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.402 resv_hugepages=0 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.402 surplus_hugepages=0 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.402 anon_hugepages=0 00:03:45.402 18:24:58 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7642972 kB' 'MemAvailable: 9441392 kB' 'Buffers: 2436 kB' 'Cached: 2011240 kB' 'SwapCached: 0 kB' 'Active: 848592 kB' 'Inactive: 1285224 kB' 'Active(anon): 130604 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121712 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137456 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72712 kB' 'KernelStack: 6432 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
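Every block of this shape is bash xtrace of one small helper: get_meminfo walks /proc/meminfo (or a node's copy under sysfs) with IFS=': ' and read -r, continuing past every key until it reaches the requested one, then echoes its value; the calls above resolved HugePages_Rsvd to 0 and are now resolving HugePages_Total to 1024. A minimal sketch of that pattern follows; the function body is illustrative only, not the actual test/setup/common.sh implementation.
get_meminfo_sketch() {                      # illustrative stand-in for get_meminfo
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # When a node id is given, read the per-node copy from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <id> "; strip that so both
    # formats split the same way on ':' and whitespace.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
The caller then re-checks the invariant that appears twice in this stretch of the log, (( total == nr_hugepages + surp + resv )), which holds here as 1024 == 1024 + 0 + 0.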
00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.402 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.403 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc 
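The get_nodes call traced here enumerates the NUMA nodes the VM exposes and seeds the expected per-node page count; on this single-node guest that is one node carrying all 1024 pages. Roughly (array name borrowed from the trace, body illustrative):
declare -a nodes_test=()
shopt -s nullglob
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_test[${node##*node}]=1024        # evenly split expectation per node
done
no_nodes=${#nodes_test[@]}
echo "no_nodes=${no_nodes}"                # 1 on this VM
(( no_nodes > 0 )) || exit 1               # bail out if no nodes were found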
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7642720 kB' 'MemUsed: 4599248 kB' 'SwapCached: 0 kB' 'Active: 848552 kB' 'Inactive: 1285224 kB' 'Active(anon): 130564 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 2013676 kB' 'Mapped: 48720 kB' 'AnonPages: 121680 kB' 'Shmem: 10464 kB' 'KernelStack: 6484 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64744 kB' 'Slab: 137452 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.404 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 
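What the surrounding trace is doing is the per-node half of the accounting: the reserved-page count is folded into each node's expectation, then any surplus reported by that node's own meminfo (here /sys/devices/system/node/node0/meminfo, whose snapshot above shows HugePages_Surp: 0) is added on top. A sketch, reusing the helper above:
resv=$(get_meminfo_sketch HugePages_Rsvd)                 # 0 in this log
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    surp=$(get_meminfo_sketch HugePages_Surp "$node")     # per-node lookup
    (( nodes_test[node] += surp ))
done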
18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.405 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.406 node0=1024 expecting 1024 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.406 00:03:45.406 real 0m0.515s 00:03:45.406 user 0m0.264s 00:03:45.406 sys 0m0.283s 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:45.406 18:24:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:45.406 
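The test then closes by printing each node's measured count next to the expected value and asserting they match, which is where node0=1024 expecting 1024 and the [[ 1024 == 1024 ]] check above come from; the real/user/sys line is the bash time summary for the whole sub-test. A compact sketch of that final assertion:
expected=1024
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${expected}"
    [[ ${nodes_test[node]} == "$expected" ]] || exit 1    # fail the sub-test on mismatch
done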
************************************ 00:03:45.406 END TEST even_2G_alloc 00:03:45.406 ************************************ 00:03:45.406 18:24:58 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:45.406 18:24:58 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:45.406 18:24:58 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.406 18:24:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.406 ************************************ 00:03:45.406 START TEST odd_alloc 00:03:45.406 ************************************ 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.406 18:24:58 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:45.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:45.667 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.667 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:45.667 18:24:59 
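odd_alloc repeats the same verification with a deliberately odd page count: get_test_nr_hugepages 2098176 asks for 2098176 kB, which with the 2048 kB default hugepage size comes out to 1025 pages, handed to scripts/setup.sh through HUGEMEM=2049 (MiB) with HUGE_EVEN_ALLOC=yes. The arithmetic below is a sketch; the round-up step is an assumption chosen to match the nr_hugepages=1025 the trace lands on.
size_kb=2098176                                     # 2049 MiB requested
hugepage_kb=$(get_meminfo_sketch Hugepagesize)      # 2048 on this VM
# Assumed round-up to whole pages, consistent with nr_hugepages=1025 above.
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # 1025
hugemem_mb=$(( size_kb / 1024 ))                    # 2049
echo "nr_hugepages=${nr_hugepages} HUGEMEM=${hugemem_mb}"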
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7654524 kB' 'MemAvailable: 9452948 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848980 kB' 'Inactive: 1285228 kB' 'Active(anon): 130992 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122100 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137440 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72696 kB' 'KernelStack: 6420 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.667 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.668 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.939 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7654776 kB' 'MemAvailable: 9453200 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848468 kB' 'Inactive: 1285228 kB' 'Active(anon): 130480 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121592 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137476 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72732 kB' 'KernelStack: 6480 kB' 'PageTables: 4216 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.940 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:45.941 18:24:59 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.941 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7654776 kB' 'MemAvailable: 9453200 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848412 kB' 'Inactive: 1285228 kB' 'Active(anon): 130424 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121536 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137476 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72732 kB' 'KernelStack: 6464 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.942 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (get_meminfo HugePages_Rsvd: reads the remaining /proc/meminfo fields -- NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free -- and continues past each non-matching key)
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:45.943 nr_hugepages=1025
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:45.943 resv_hugepages=0
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:45.943 surplus_hugepages=0
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:45.943 anon_hugepages=0
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:45.943 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7654776 kB' 'MemAvailable: 9453200 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848632 kB' 'Inactive: 1285228 kB' 'Active(anon): 130644 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121756 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137476 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72732 kB' 'KernelStack: 6448 kB' 'PageTables: 4128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:03:45.944 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (get_meminfo HugePages_Total: reads each field of the snapshot above, MemTotal through Unaccepted, and continues past each non-matching key)
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
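Note: the HugePages lookups traced above all follow the same pattern -- open /proc/meminfo (or a per-node meminfo file), split each line on ': ', and echo the value of the first key that matches. A minimal sketch of that pattern is below; it is illustrative only (the function name get_meminfo_sketch and the streaming loop are assumptions, not the setup/common.sh source, which uses mapfile over an array instead).

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: scan a meminfo file and print the
# value of the requested key. Illustrative only, not the SPDK helper itself.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line key val
    # Per-node statistics live under /sys when a node id is given.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <id> "; strip it first.
        line=${line#"Node $node "}
        IFS=': ' read -r key val _ <<< "$line"
        if [[ $key == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Total    # would print 1025 for the snapshot above
get_meminfo_sketch HugePages_Surp 0   # per-node variant, as used for node 0 below

The key-by-key "continue" lines in the trace are exactly this scan: every field before the requested one is read and skipped.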
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-31 -- # (get_meminfo HugePages_Surp, node=0: mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; IFS=': ')
00:03:45.945 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7654776 kB' 'MemUsed: 4587192 kB' 'SwapCached: 0 kB' 'Active: 848460 kB' 'Inactive: 1285228 kB' 'Active(anon): 130472 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2013680 kB' 'Mapped: 48724 kB' 'AnonPages: 121624 kB' 'Shmem: 10464 kB' 'KernelStack: 6432 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64744 kB' 'Slab: 137468 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:03:45.946 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (get_meminfo HugePages_Surp: reads each node0 field of the snapshot above, MemTotal through HugePages_Free, and continues past each non-matching key)
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:03:45.947 node0=1025 expecting 1025
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
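Note: the per-node check above reduces to simple bookkeeping: the count expected on each node is its share of nr_hugepages plus globally reserved pages plus the node's surplus, and that sum must match what the kernel reports. A small sketch of that arithmetic follows; the variable values are taken from this run, but the loop itself is an illustrative reconstruction rather than the hugepages.sh source.

#!/usr/bin/env bash
# Sketch of the odd_alloc per-node verification seen above (illustrative).
nr_hugepages=1025
resv=0                     # HugePages_Rsvd read from /proc/meminfo
nodes_sys=([0]=1025)       # hugepages the kernel reports per node
nodes_test=([0]=1025)      # hugepages the test expects per node

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))    # reserved pages count toward the node
    surp=0                            # per-node HugePages_Surp (0 in this run)
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    (( nodes_test[node] == nodes_sys[node] )) || exit 1
done

Run as-is this prints "node0=1025 expecting 1025", matching the log line above.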
00:03:45.947 real 0m0.508s
00:03:45.947 user 0m0.255s
00:03:45.947 sys 0m0.284s
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:45.947 18:24:59 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:45.947 ************************************
00:03:45.947 END TEST odd_alloc
00:03:45.947 ************************************
00:03:45.947 18:24:59 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:45.947 18:24:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:45.947 18:24:59 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:45.947 18:24:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.947 ************************************
00:03:45.947 START TEST custom_alloc
00:03:45.947 ************************************
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62-84 -- # (no user nodes given: _nr_hugepages=512, _no_nodes=1, nodes_test[_no_nodes - 1]=512)
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62-78 -- # (nodes_hp drives the split: nodes_test[0]=512, return 0)
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.947 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:46.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:46.209 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:46.209 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:03:46.473 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
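Note: HUGENODE='nodes_hp[0]=512' asks the setup step for 512 hugepages specifically on node 0. The sketch below shows the standard sysfs mechanism such a per-node request ultimately relies on; the paths are the kernel's documented interface, but the exact commands scripts/setup.sh issues may differ, so treat this as an assumption-labelled illustration rather than what the script runs.

#!/usr/bin/env bash
# Sketch: request 512 x 2 MiB hugepages on a single NUMA node via sysfs.
node=0
pages=512
hp_dir=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB

# Ask the kernel for the per-node allocation.
echo "$pages" | sudo tee "$hp_dir/nr_hugepages" > /dev/null

# Confirm what was actually granted for that node.
grep -E 'HugePages_(Total|Free)' "/sys/devices/system/node/node$node/meminfo"

The verification that follows in the log reads these same counters back (HugePages_Total: 512, HugePages_Free: 512 in the snapshot below) and compares them with the requested layout.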
00:03:46.473 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:46.473 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:03:46.473 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:46.473 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:46.473 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-31 -- # (get_meminfo AnonHugePages: node unset, so mem_f=/proc/meminfo; mapfile -t mem; IFS=': '; read -r var val _)
00:03:46.473 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8705756 kB' 'MemAvailable: 10504180 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 849284 kB' 'Inactive: 1285228 kB' 'Active(anon): 131296 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122448 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137500 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72756 kB' 'KernelStack: 6516 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (get_meminfo AnonHugePages: reads each /proc/meminfo field -- MemTotal through VmallocChunk so far -- continuing past each non-matching key)
setup/common.sh@31 -- # IFS=': ' 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.474 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8705756 kB' 'MemAvailable: 10504180 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848736 kB' 'Inactive: 1285228 kB' 'Active(anon): 130748 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121952 kB' 'Mapped: 48836 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137484 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72740 kB' 'KernelStack: 6452 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 
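For reference, the lookup traced here (and repeated below for each counter) is just a field-by-field scan of a /proc/meminfo snapshot with `IFS=': ' read -r var val _`, skipping every key until the requested one matches and then echoing its value. A minimal standalone equivalent, assuming plain /proc/meminfo input (hypothetical helper name, not the actual setup/common.sh code):

#!/usr/bin/env bash
# meminfo_get KEY  ->  print the value column for KEY from /proc/meminfo
# e.g. `meminfo_get HugePages_Surp` prints 0 on the host traced above.
meminfo_get() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # each line looks like "Key:   <number>" or "Key:   <number> kB";
        # the unit, when present, lands in $_ and is ignored
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1    # requested key not present
}

meminfo_get HugePages_Surp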
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.475 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8705756 kB' 'MemAvailable: 10504180 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848564 kB' 'Inactive: 1285228 kB' 'Active(anon): 130576 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121704 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137500 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72756 kB' 'KernelStack: 6480 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- 
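The same scan now repeats for HugePages_Rsvd, and the `local node=` / `node/node/meminfo` checks in the trace show why the helper is parameterised: given a NUMA node argument, the source switches from /proc/meminfo to /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that has to be stripped before the key comparison. A sketch of that selection step, under the same assumptions as the helper above (hypothetical name, not the actual setup/common.sh code):

# meminfo_get_node KEY [NODE]  ->  value of KEY, system-wide or for one NUMA node
meminfo_get_node() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo prefix='' line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
        prefix="Node $node "        # per-node lines read "Node 0 MemTotal: ..."
    fi
    while IFS= read -r line; do
        IFS=': ' read -r var val _ <<< "${line#"$prefix"}"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

meminfo_get_node HugePages_Free 0    # free hugepages on NUMA node 0, if that node exists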
setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.476 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.477 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:46.478 nr_hugepages=512 00:03:46.478 resv_hugepages=0 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:46.478 surplus_hugepages=0 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:46.478 anon_hugepages=0 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.478 
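With anon, surp and resv all read back as 0, the script prints the nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary seen above and asserts that the 512 pages it configured are fully accounted for before fetching HugePages_Total. A condensed, self-contained version of that bookkeeping check (hypothetical wrapper using awk instead of the bash read loop; not the actual setup/hugepages.sh code):

# verify_custom_alloc EXPECTED  ->  succeed if hugetlb accounting matches the request
verify_custom_alloc() {
    local expected=$1 total surp resv anon
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    anon=$(awk  '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo)   # THP usage, in kB
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # every page must be a plain preallocated hugetlb page: no surplus, none reserved
    (( expected == total + surp + resv )) && (( expected == total ))
}

verify_custom_alloc 512    # passes on the host traced above (512 pages, 0 surplus, 0 reserved)

The snapshots above are also internally consistent: 512 pages at the reported Hugepagesize of 2048 kB come to 512 * 2048 = 1048576 kB, exactly the Hugetlb figure in the same printout.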
18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.478 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8713808 kB' 'MemAvailable: 10512232 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848516 kB' 'Inactive: 1285228 kB' 'Active(anon): 130528 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121632 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137500 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72756 kB' 'KernelStack: 6464 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.479 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _
[the IFS=': ' / read -r var val _ / compare / continue cycle repeats here for every remaining /proc/meminfo key from Buffers through CmaTotal; none of them match HugePages_Total]
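Right after the scan below reaches HugePages_Total (512), the custom_alloc check turns to the per-node split: get_nodes enumerates /sys/devices/system/node/node*, the reserved count and each node's HugePages_Surp are folded into the expected total, and the test prints node0=512 expecting 512 before comparing. A simplified sketch of the shape of that accounting, pieced together from the hugepages.sh line numbers visible in the trace and reusing the get_meminfo sketch above (approximate, not the script's verbatim source):

    # Simplified reconstruction of the per-node verification (hugepages.sh@110..@130).
    verify_custom_alloc() {
        local expected=$1                  # 512 in this run
        local node node_dir resv surp
        local -a nodes_test=()
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            nodes_test[${node_dir##*node}]=$expected
        done
        resv=$(get_meminfo HugePages_Rsvd)           # 0 in the trace above
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))
            surp=$(get_meminfo HugePages_Surp "$node")   # 0 for node0 below
            (( nodes_test[node] += surp ))
            echo "node$node=${nodes_test[$node]} expecting $expected"
        done
    }

In the trace that resumes below, CmaFree and Unaccepted fall through, HugePages_Total matches and echoes 512, a single node is found, node0 ends at 512 expecting 512, and the closing [[ 512 == 512 ]] check passes before the custom_alloc timing summary.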
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8713808 kB' 'MemUsed: 3528160 kB' 'SwapCached: 0 kB' 
'Active: 848312 kB' 'Inactive: 1285228 kB' 'Active(anon): 130324 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2013680 kB' 'Mapped: 48724 kB' 'AnonPages: 121468 kB' 'Shmem: 10464 kB' 'KernelStack: 6480 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64744 kB' 'Slab: 137496 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.480 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _
[the identical read / compare / continue cycle repeats here for every remaining node0 meminfo key from Inactive(anon) through FilePmdMapped while get_meminfo scans for HugePages_Surp]
00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:46.481 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:46.482 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:46.482 node0=512 expecting 512 00:03:46.482 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:46.482 18:24:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:46.482 00:03:46.482 real 0m0.528s 00:03:46.482 user 0m0.266s 00:03:46.482 sys 0m0.296s 00:03:46.482 18:24:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:46.482 18:24:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:46.482 ************************************ 00:03:46.482 END TEST custom_alloc 00:03:46.482 ************************************ 00:03:46.482 18:24:59 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:46.482 18:24:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:46.482 18:24:59 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:46.482 18:24:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:46.482 ************************************ 00:03:46.482 START TEST no_shrink_alloc 00:03:46.482 ************************************ 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:46.482 18:24:59 
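The no_shrink_alloc test that starts here opens with get_test_nr_hugepages 2097152 0, whose body is traced next: the requested size is divided by the default hugepage size to get a page count, and that count is assigned to the listed node ids. With 2048 kB pages, 2097152 / 2048 = 1024, which matches the HugePages_Total: 1024 and Hugetlb: 2097152 kB fields in the snapshots that follow. A simplified bash sketch reconstructed from the traced hugepages.sh lines, reusing the get_meminfo sketch above (names follow the trace, structure is approximate):

    # Simplified reconstruction of get_test_nr_hugepages (hugepages.sh@49..@73).
    get_test_nr_hugepages() {
        local size=$1; shift                  # size in the same kB units as Hugepagesize
        local node_ids=("$@") node
        local default_hugepages
        default_hugepages=$(get_meminfo Hugepagesize)   # 2048 kB on this VM
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024
        declare -ga nodes_test=()             # per-node targets, visible to the caller
        # The traced helper also spreads pages evenly when no node ids are given;
        # that branch is omitted here.
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages    # everything lands on node 0 in this run
        done
    }

    get_test_nr_hugepages 2097152 0    # nr_hugepages=1024, nodes_test[0]=1024

verify_nr_hugepages then repeats the get_meminfo lookups seen above (AnonHugePages, HugePages_Surp, and so on) against those targets; the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] line before them checks that the selected transparent hugepage mode (the string has the familiar format of the kernel's transparent_hugepage enabled file) is not never.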
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.482 18:24:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.052 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.052 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.052 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.052 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:47.052 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.052 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.052 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.052 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7664128 kB' 'MemAvailable: 9462552 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 849224 kB' 'Inactive: 1285228 kB' 'Active(anon): 131236 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122288 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137448 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72704 kB' 'KernelStack: 6496 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.053 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
'
[the IFS=': ' / read -r var val _ / compare / continue cycle repeats here for every /proc/meminfo key from Buffers through VmallocUsed; none of them match AnonHugePages]
00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7664128 kB' 'MemAvailable: 9462552 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848968 kB' 'Inactive: 1285228 kB' 'Active(anon): 130980 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122004 kB' 'Mapped: 48912 kB' 'Shmem: 10464 kB' 
'KReclaimable: 64744 kB' 'Slab: 137448 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72704 kB' 'KernelStack: 6464 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.054 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.054 18:25:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.055 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7664128 kB' 'MemAvailable: 9462552 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848736 kB' 'Inactive: 1285228 kB' 'Active(anon): 130748 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 121692 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137460 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6460 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.056 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.057 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:47.058 nr_hugepages=1024 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.058 resv_hugepages=0 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.058 surplus_hugepages=0 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.058 anon_hugepages=0 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7664128 kB' 'MemAvailable: 9462552 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 848768 kB' 'Inactive: 1285228 kB' 'Active(anon): 130780 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 121728 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 64744 kB' 'Slab: 137460 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72716 kB' 'KernelStack: 6444 kB' 'PageTables: 4092 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.058 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.059 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7664128 kB' 'MemUsed: 4577840 kB' 'SwapCached: 0 kB' 'Active: 848636 kB' 'Inactive: 1285228 kB' 'Active(anon): 130648 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 2013680 kB' 'Mapped: 48724 kB' 'AnonPages: 121668 kB' 'Shmem: 10464 kB' 'KernelStack: 6460 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64744 kB' 'Slab: 137456 kB' 'SReclaimable: 64744 kB' 'SUnreclaim: 72712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.060 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.061 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.062 node0=1024 expecting 1024 00:03:47.062 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.062 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:47.062 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:47.062 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:47.062 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.062 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.320 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.320 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.320 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:47.320 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.320 18:25:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.320 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7665712 kB' 'MemAvailable: 9464132 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 844760 kB' 'Inactive: 1285228 kB' 'Active(anon): 126772 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117924 kB' 'Mapped: 48180 kB' 'Shmem: 10464 kB' 'KReclaimable: 64732 kB' 'Slab: 137296 kB' 'SReclaimable: 64732 kB' 'SUnreclaim: 72564 kB' 'KernelStack: 6392 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.585 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.586 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7665964 kB' 'MemAvailable: 9464384 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 844548 kB' 'Inactive: 1285228 kB' 'Active(anon): 126560 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 
'Writeback: 0 kB' 'AnonPages: 117636 kB' 'Mapped: 48128 kB' 'Shmem: 10464 kB' 'KReclaimable: 64732 kB' 'Slab: 137256 kB' 'SReclaimable: 64732 kB' 'SUnreclaim: 72524 kB' 'KernelStack: 6308 kB' 'PageTables: 3588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
00:03:47.587 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-@32 xtrace, 00:03:47.587-00:03:47.589: for every key from MemTotal through HugePages_Rsvd the loop logs "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]", "continue", "IFS=': '" and "read -r var val _" before reaching the match below]
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
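The block above is setup/common.sh's get_meminfo walking /proc/meminfo one line at a time until it finds the requested key (here HugePages_Surp, which comes back 0). For readers following the trace, a minimal self-contained bash sketch of that parsing pattern follows; the name get_meminfo_sketch and its argument handling are illustrative assumptions, not the verbatim SPDK helper.

  #!/usr/bin/env bash
  # Sketch of the scan traced above: read a meminfo file, strip any
  # per-node "Node <n> " prefix, split each line on ': ' and print the
  # value of the requested key. Assumed structure, not the SPDK source.
  shopt -s extglob                      # needed for the +([0-9]) pattern below

  get_meminfo_sketch() {
      local get=$1 node=${2:-}          # key to look up, optional NUMA node
      local mem_f=/proc/meminfo
      local mem line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
          [[ $var == "$get" ]] || continue         # skip every other key
          echo "$val"
          return 0
      done
      return 1
  }

  # Usage matching the trace: get_meminfo_sketch HugePages_Surp   # prints 0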
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.589 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7665712 kB' 'MemAvailable: 9464132 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 844156 kB' 'Inactive: 1285228 kB' 'Active(anon): 126168 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 117272 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 64732 kB' 'Slab: 137248 kB' 'SReclaimable: 64732 kB' 'SUnreclaim: 72516 kB' 'KernelStack: 6352 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-@32 xtrace, 00:03:47.590-00:03:47.591: for every key from MemTotal through HugePages_Free the loop logs "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]", "continue", "IFS=': '" and "read -r var val _" before reaching the match below]
00:03:47.591 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.591 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:47.591 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:47.591 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:47.591 nr_hugepages=1024
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
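The hugepages.sh lines just above are the bookkeeping step of this no_shrink_alloc test: surplus and reserved page counts are read back, the derived nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages values are echoed, and the script asserts that the 1024 pages it configured are still fully accounted for. A condensed sketch of that check is below, reusing the hypothetical get_meminfo_sketch from the earlier block; the function name and exact ordering are assumptions, not the literal setup/hugepages.sh code.

  # Assumed shape of the verification traced above (1024 is the hugepage
  # count the test configured earlier in this log).
  verify_nr_hugepages_sketch() {
      local requested=$1
      local surp resv nr_hugepages anon
      surp=$(get_meminfo_sketch HugePages_Surp)           # 0 in this run
      resv=$(get_meminfo_sketch HugePages_Rsvd)           # 0 in this run
      nr_hugepages=$(get_meminfo_sketch HugePages_Total)  # 1024 in this run
      anon=$(get_meminfo_sketch AnonHugePages)            # 0 (kB) in this run
      echo "nr_hugepages=$nr_hugepages"
      echo "resv_hugepages=$resv"
      echo "surplus_hugepages=$surp"
      echo "anon_hugepages=$anon"
      # The pool must not have shrunk: the requested count still matches the
      # pool once surplus and reserved pages are folded in.
      (( requested == nr_hugepages + surp + resv )) || return 1
      (( requested == nr_hugepages ))
  }

  # Usage matching the trace: verify_nr_hugepages_sketch 1024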
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.592 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.592 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:47.592 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:47.592 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7665712 kB' 'MemAvailable: 9464132 kB' 'Buffers: 2436 kB' 'Cached: 2011244 kB' 'SwapCached: 0 kB' 'Active: 844416 kB' 'Inactive: 1285228 kB' 'Active(anon): 126428 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 117532 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 64732 kB' 'Slab: 137248 kB' 'SReclaimable: 64732 kB' 'SUnreclaim: 72516 kB' 'KernelStack: 6352 kB' 'PageTables: 3640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 5066752 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-@32 xtrace, 00:03:47.592-00:03:47.593: for every key from MemTotal through Unaccepted the loop logs "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]", "continue", "IFS=': '" and "read -r var val _" before reaching the match below]
00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:47.593 18:25:00
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.593 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7665464 kB' 'MemUsed: 4576504 kB' 'SwapCached: 0 kB' 'Active: 844196 kB' 'Inactive: 1285228 kB' 'Active(anon): 126208 kB' 'Inactive(anon): 0 kB' 'Active(file): 717988 kB' 'Inactive(file): 1285228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 2013680 kB' 'Mapped: 47984 kB' 'AnonPages: 117316 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64732 kB' 'Slab: 137236 kB' 'SReclaimable: 64732 kB' 'SUnreclaim: 72504 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 
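The block above is setup/common.sh's get_meminfo at work: with node=0 it switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the "Node 0" prefix from every line, and then walks the "Key: value" pairs until the requested key (here HugePages_Surp) matches, echoing that value. A minimal stand-alone sketch of the same lookup follows; the helper name and the extglob-based prefix strip are illustrative, not lifted from common.sh.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above: scan "Key: value" pairs and
    # print the value for the requested key. Helper name is illustrative only.
    shopt -s extglob
    get_meminfo_value() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node counters live in sysfs; fall back to the system-wide file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }               # drop any "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }
    get_meminfo_value HugePages_Surp 0    # prints 0 on the node dumped above

On the node whose meminfo is printed above, that lookup yields 0, matching the echo 0 the traced scan reaches further down.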
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 
18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.594 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.595 node0=1024 expecting 1024 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.595 00:03:47.595 real 0m1.032s 00:03:47.595 user 0m0.512s 00:03:47.595 sys 0m0.586s 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:47.595 18:25:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:47.595 ************************************ 00:03:47.595 END TEST no_shrink_alloc 00:03:47.595 ************************************ 00:03:47.595 18:25:00 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:47.595 18:25:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:47.595 18:25:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:47.595 18:25:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.595 18:25:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:47.595 18:25:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.595 18:25:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:47.595 18:25:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:47.595 18:25:01 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:47.595 ************************************ 00:03:47.595 END TEST hugepages 00:03:47.595 ************************************ 00:03:47.595 00:03:47.595 real 0m4.587s 00:03:47.595 user 0m2.232s 00:03:47.595 sys 0m2.477s 00:03:47.595 18:25:01 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:47.595 18:25:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.595 18:25:01 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:47.595 18:25:01 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:47.595 18:25:01 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:47.595 18:25:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:47.595 ************************************ 00:03:47.595 START TEST driver 00:03:47.595 ************************************ 00:03:47.595 18:25:01 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:47.853 * Looking for test storage... 00:03:47.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:47.853 18:25:01 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:47.853 18:25:01 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.853 18:25:01 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:48.420 18:25:01 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:48.420 18:25:01 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.420 18:25:01 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.420 18:25:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:48.420 ************************************ 00:03:48.420 START TEST guess_driver 00:03:48.420 ************************************ 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:48.420 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:48.420 Looking for driver=uio_pci_generic 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.420 18:25:01 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:48.986 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:48.986 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:48.986 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.986 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.986 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:48.986 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.244 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:49.244 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:49.244 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:49.244 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:49.244 18:25:02 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:49.244 18:25:02 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.244 18:25:02 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.811 00:03:49.811 real 0m1.417s 00:03:49.811 user 0m0.498s 00:03:49.811 sys 0m0.906s 00:03:49.811 18:25:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:49.811 18:25:03 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.811 ************************************ 00:03:49.811 END TEST guess_driver 00:03:49.811 ************************************ 00:03:49.811 00:03:49.811 real 0m2.122s 00:03:49.811 user 0m0.726s 00:03:49.811 sys 0m1.437s 00:03:49.811 18:25:03 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:49.811 18:25:03 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:49.811 ************************************ 00:03:49.811 END TEST driver 00:03:49.811 ************************************ 00:03:49.811 18:25:03 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:49.811 18:25:03 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:49.811 18:25:03 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:49.811 18:25:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.811 ************************************ 00:03:49.811 START TEST devices 00:03:49.811 
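For context on the guess_driver pass that just finished: vfio is only picked when /sys/kernel/iommu_groups has entries or unsafe no-IOMMU mode is enabled; since neither held here, the script fell back to uio_pci_generic after checking with modprobe --show-depends that the module and its uio dependency resolve to .ko files. A compressed sketch of that decision, assuming the same sysfs and module paths (the function name is illustrative):

    #!/usr/bin/env bash
    # Sketch of the driver choice traced above: prefer vfio-pci when an IOMMU
    # is usable, otherwise fall back to uio_pci_generic if the module exists.
    pick_driver() {
        local unsafe_noiommu=N groups=(/sys/kernel/iommu_groups/*)
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_noiommu=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if [[ -e ${groups[0]} || $unsafe_noiommu == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic &> /dev/null; then
            # --show-depends resolves the .ko chain without loading anything.
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }
    pick_driver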
************************************ 00:03:49.811 18:25:03 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:49.811 * Looking for test storage... 00:03:50.101 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:50.101 18:25:03 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:50.101 18:25:03 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:50.101 18:25:03 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.101 18:25:03 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:50.676 18:25:04 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
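The get_zoned_devs loop above reads /sys/block/nvme*/queue/zoned for each namespace, and only non-zoned ("none") devices stay eligible; devices.sh then also insists on min_disk_size=3221225472 bytes (3 GiB) per disk. A rough stand-alone version of that filter, assuming the standard sysfs layout (variable and array names are illustrative, not from devices.sh):

    #!/usr/bin/env bash
    # Sketch of the eligibility filter traced above: skip zoned namespaces and
    # anything smaller than 3 GiB.
    shopt -s nullglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

    candidates=()
    for dev in /sys/block/nvme*; do
        # Zoned namespaces report "host-aware"/"host-managed" instead of "none".
        [[ -e $dev/queue/zoned && $(< "$dev/queue/zoned") != none ]] && continue
        # /sys/block/*/size counts 512-byte sectors.
        (( $(< "$dev/size") * 512 >= min_disk_size )) && candidates+=("${dev##*/}")
    done
    printf 'usable: %s\n' "${candidates[@]}"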
00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:50.676 18:25:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:50.676 18:25:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:50.676 No valid GPT data, bailing 00:03:50.676 18:25:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.676 18:25:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.676 18:25:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:50.676 18:25:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:50.676 18:25:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:50.676 18:25:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.676 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:50.676 18:25:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:50.676 18:25:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:50.676 No valid GPT data, bailing 00:03:50.676 18:25:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:50.677 18:25:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.677 18:25:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:50.677 18:25:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:50.677 18:25:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:50.677 18:25:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
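The nvme0n1 iteration above also shows the in-use test: scripts/spdk-gpt.py finds no SPDK GPT data ("No valid GPT data, bailing") and blkid -s PTTYPE -o value prints nothing, so block_in_use returns 1 and the disk is treated as free for the mount tests. The blkid half of that probe, reduced to a sketch (raw device access typically needs root):

    #!/usr/bin/env bash
    # Return 0 if the device already carries a partition table, 1 otherwise -
    # the same blkid probe traced above for /dev/nvme0n1.
    block_has_pt() {
        local pt
        pt=$(blkid -s PTTYPE -o value "/dev/$1" 2> /dev/null)
        [[ -n $pt ]]    # empty PTTYPE means no partition table was found
    }
    block_has_pt nvme0n1 && echo "nvme0n1 already partitioned" || echo "nvme0n1 looks free"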
00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:50.677 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:50.677 18:25:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:50.677 18:25:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:50.935 No valid GPT data, bailing 00:03:50.935 18:25:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:50.935 18:25:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.935 18:25:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:50.935 18:25:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:50.935 18:25:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:50.935 18:25:04 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:50.935 18:25:04 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:50.935 18:25:04 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:50.935 No valid GPT data, bailing 00:03:50.935 18:25:04 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:50.935 18:25:04 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:50.935 18:25:04 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:50.935 18:25:04 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:50.935 18:25:04 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:50.935 18:25:04 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:50.935 18:25:04 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:50.935 18:25:04 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:50.935 18:25:04 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:50.935 18:25:04 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:50.935 18:25:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:50.935 ************************************ 00:03:50.935 START TEST nvme_mount 00:03:50.935 ************************************ 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:50.935 18:25:04 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:51.870 Creating new GPT entries in memory. 00:03:51.870 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:51.870 other utilities. 00:03:51.870 18:25:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:51.870 18:25:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.870 18:25:05 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:51.870 18:25:05 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:51.870 18:25:05 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:53.247 Creating new GPT entries in memory. 00:03:53.247 The operation has completed successfully. 
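partition_drive above wipes the disk's old label with sgdisk --zap-all and then creates partition 1 over sectors 2048-264191, i.e. 262144 sectors (128 MiB at 512-byte sectors), while sync_dev_uevents.sh waits for the new nvme0n1p1 to appear. The same two sgdisk calls in isolation, with udevadm settle standing in for that uevent handshake (destructive; the device name is taken from the trace):

    #!/usr/bin/env bash
    # Re-create the single test partition the way the trace above does.
    # DESTRUCTIVE: wipes /dev/nvme0n1.
    set -euo pipefail
    disk=/dev/nvme0n1

    sgdisk "$disk" --zap-all              # drop the old GPT and protective MBR
    sgdisk "$disk" --new=1:2048:264191    # partition 1: sectors 2048..264191
    udevadm settle                        # rough stand-in for sync_dev_uevents.sh
    lsblk "$disk"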
00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56879 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.247 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.506 18:25:06 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:53.506 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:53.506 18:25:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:53.765 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.765 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:53.765 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:53.765 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.765 18:25:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.025 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.025 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:54.025 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.025 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.025 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.025 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.283 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.283 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.283 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.283 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.283 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.283 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.284 18:25:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.851 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.851 00:03:54.851 real 0m3.996s 00:03:54.851 user 0m0.718s 00:03:54.851 sys 0m1.016s 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.851 18:25:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:54.851 ************************************ 00:03:54.851 END TEST nvme_mount 00:03:54.851 
************************************ 00:03:55.109 18:25:08 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:55.109 18:25:08 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.109 18:25:08 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.109 18:25:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.109 ************************************ 00:03:55.109 START TEST dm_mount 00:03:55.109 ************************************ 00:03:55.109 18:25:08 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:03:55.109 18:25:08 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:55.110 18:25:08 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:56.069 Creating new GPT entries in memory. 00:03:56.069 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:56.069 other utilities. 00:03:56.069 18:25:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:56.069 18:25:09 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.069 18:25:09 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.069 18:25:09 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.069 18:25:09 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:57.006 Creating new GPT entries in memory. 00:03:57.006 The operation has completed successfully. 
00:03:57.006 18:25:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:57.006 18:25:10 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.006 18:25:10 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:57.006 18:25:10 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:57.006 18:25:10 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:58.388 The operation has completed successfully. 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57312 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.388 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.389 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.389 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.650 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.650 18:25:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:58.650 18:25:12 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.650 18:25:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:58.910 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.910 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:58.910 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:58.910 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.910 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.910 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:58.910 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:58.910 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:59.168 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:03:59.168 00:03:59.168 real 0m4.224s 00:03:59.168 user 0m0.472s 00:03:59.168 sys 0m0.704s 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.168 ************************************ 00:03:59.168 END TEST dm_mount 00:03:59.168 ************************************ 00:03:59.168 18:25:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:59.168 18:25:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:59.168 18:25:12 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:59.168 18:25:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.168 18:25:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.168 18:25:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:59.168 18:25:12 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.168 18:25:12 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:59.426 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:59.426 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:59.426 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:59.426 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:59.426 18:25:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:59.426 18:25:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:59.426 18:25:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:59.426 18:25:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:59.426 18:25:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:59.426 18:25:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:59.426 18:25:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:59.426 00:03:59.426 real 0m9.696s 00:03:59.426 user 0m1.790s 00:03:59.426 sys 0m2.314s 00:03:59.426 18:25:12 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.426 18:25:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:59.685 ************************************ 00:03:59.685 END TEST devices 00:03:59.685 ************************************ 00:03:59.685 00:03:59.685 real 0m21.417s 00:03:59.685 user 0m6.866s 00:03:59.685 sys 0m9.047s 00:03:59.685 18:25:12 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.685 18:25:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:59.685 ************************************ 00:03:59.685 END TEST setup.sh 00:03:59.685 ************************************ 00:03:59.685 18:25:13 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:00.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.252 Hugepages 00:04:00.252 node hugesize free / total 00:04:00.252 node0 1048576kB 0 / 0 00:04:00.252 node0 2048kB 2048 / 2048 00:04:00.252 00:04:00.252 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:00.252 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:00.510 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:00.510 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:04:00.510 18:25:13 -- spdk/autotest.sh@130 -- # uname -s 00:04:00.510 18:25:13 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:00.510 18:25:13 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:00.510 18:25:13 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.340 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.340 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.340 18:25:14 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:02.288 18:25:15 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:02.288 18:25:15 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:02.288 18:25:15 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:02.288 18:25:15 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:02.288 18:25:15 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:02.288 18:25:15 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:02.288 18:25:15 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.288 18:25:15 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:02.288 18:25:15 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:02.545 18:25:15 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:04:02.545 18:25:15 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.545 18:25:15 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:02.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:02.802 Waiting for block devices as requested 00:04:02.802 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.802 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:03.061 18:25:16 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:03.061 18:25:16 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:03.061 18:25:16 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.061 18:25:16 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:04:03.061 18:25:16 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.061 18:25:16 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:03.061 18:25:16 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:03.061 18:25:16 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:04:03.061 18:25:16 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:04:03.061 18:25:16 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:04:03.061 18:25:16 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:04:03.061 18:25:16 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:03.061 18:25:16 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:03.061 18:25:16 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:04:03.061 18:25:16 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:03.061 18:25:16 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:03.061 18:25:16 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
00:04:03.061 18:25:16 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:03.061 18:25:16 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:03.061 18:25:16 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:03.061 18:25:16 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:03.061 18:25:16 -- common/autotest_common.sh@1553 -- # continue 00:04:03.061 18:25:16 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:03.061 18:25:16 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:03.061 18:25:16 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:04:03.061 18:25:16 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:03.061 18:25:16 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.061 18:25:16 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:03.061 18:25:16 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:03.061 18:25:16 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:03.061 18:25:16 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:03.061 18:25:16 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:03.061 18:25:16 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:03.061 18:25:16 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:03.061 18:25:16 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:03.061 18:25:16 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:04:03.061 18:25:16 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:03.061 18:25:16 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:03.061 18:25:16 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:03.061 18:25:16 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:03.061 18:25:16 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:03.061 18:25:16 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:03.061 18:25:16 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:03.061 18:25:16 -- common/autotest_common.sh@1553 -- # continue 00:04:03.061 18:25:16 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:03.061 18:25:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.061 18:25:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.061 18:25:16 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:03.061 18:25:16 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:03.061 18:25:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.061 18:25:16 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:03.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.885 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.885 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:03.885 18:25:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:03.885 18:25:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.885 18:25:17 -- common/autotest_common.sh@10 -- # set +x 00:04:03.885 18:25:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:03.885 18:25:17 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:03.885 18:25:17 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:03.885 18:25:17 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:04:03.885 18:25:17 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:03.885 18:25:17 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:03.885 18:25:17 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:03.885 18:25:17 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:03.885 18:25:17 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:03.885 18:25:17 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:03.885 18:25:17 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:04.144 18:25:17 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:04:04.144 18:25:17 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:04.144 18:25:17 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:04.144 18:25:17 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:04.144 18:25:17 -- common/autotest_common.sh@1576 -- # device=0x0010 00:04:04.144 18:25:17 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.144 18:25:17 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:04.144 18:25:17 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:04.144 18:25:17 -- common/autotest_common.sh@1576 -- # device=0x0010 00:04:04.144 18:25:17 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:04.144 18:25:17 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:04:04.144 18:25:17 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:04:04.144 18:25:17 -- common/autotest_common.sh@1589 -- # return 0 00:04:04.144 18:25:17 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:04.144 18:25:17 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:04.144 18:25:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.144 18:25:17 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:04.144 18:25:17 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:04.144 18:25:17 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:04.144 18:25:17 -- common/autotest_common.sh@10 -- # set +x 00:04:04.144 18:25:17 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:04.144 18:25:17 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:04.144 18:25:17 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:04.144 18:25:17 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.144 18:25:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:04.144 18:25:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.144 18:25:17 -- common/autotest_common.sh@10 -- # set +x 00:04:04.144 ************************************ 00:04:04.144 START TEST env 00:04:04.144 ************************************ 00:04:04.144 18:25:17 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:04.144 * Looking for test storage... 
00:04:04.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:04.144 18:25:17 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.145 18:25:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:04.145 18:25:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.145 18:25:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.145 ************************************ 00:04:04.145 START TEST env_memory 00:04:04.145 ************************************ 00:04:04.145 18:25:17 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:04.145 00:04:04.145 00:04:04.145 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.145 http://cunit.sourceforge.net/ 00:04:04.145 00:04:04.145 00:04:04.145 Suite: memory 00:04:04.145 Test: alloc and free memory map ...[2024-05-16 18:25:17.591025] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:04.145 passed 00:04:04.145 Test: mem map translation ...[2024-05-16 18:25:17.622384] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:04.145 [2024-05-16 18:25:17.622430] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:04.145 [2024-05-16 18:25:17.622488] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:04.145 [2024-05-16 18:25:17.622499] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:04.404 passed 00:04:04.404 Test: mem map registration ...[2024-05-16 18:25:17.687435] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:04.404 [2024-05-16 18:25:17.687497] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:04.404 passed 00:04:04.404 Test: mem map adjacent registrations ...passed 00:04:04.404 00:04:04.404 Run Summary: Type Total Ran Passed Failed Inactive 00:04:04.404 suites 1 1 n/a 0 0 00:04:04.404 tests 4 4 4 0 0 00:04:04.404 asserts 152 152 152 0 n/a 00:04:04.404 00:04:04.404 Elapsed time = 0.220 seconds 00:04:04.404 00:04:04.404 real 0m0.232s 00:04:04.404 user 0m0.215s 00:04:04.404 sys 0m0.016s 00:04:04.404 18:25:17 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:04.404 18:25:17 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:04.404 ************************************ 00:04:04.404 END TEST env_memory 00:04:04.404 ************************************ 00:04:04.404 18:25:17 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.404 18:25:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:04.404 18:25:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:04.404 18:25:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:04.404 ************************************ 00:04:04.404 START TEST env_vtophys 00:04:04.404 ************************************ 00:04:04.404 18:25:17 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:04.404 EAL: lib.eal log level changed from notice to debug 00:04:04.404 EAL: Detected lcore 0 as core 0 on socket 0 00:04:04.404 EAL: Detected lcore 1 as core 0 on socket 0 00:04:04.404 EAL: Detected lcore 2 as core 0 on socket 0 00:04:04.404 EAL: Detected lcore 3 as core 0 on socket 0 00:04:04.404 EAL: Detected lcore 4 as core 0 on socket 0 00:04:04.404 EAL: Detected lcore 5 as core 0 on socket 0 00:04:04.404 EAL: Detected lcore 6 as core 0 on socket 0 00:04:04.404 EAL: Detected lcore 7 as core 0 on socket 0 00:04:04.404 EAL: Detected lcore 8 as core 0 on socket 0 00:04:04.404 EAL: Detected lcore 9 as core 0 on socket 0 00:04:04.404 EAL: Maximum logical cores by configuration: 128 00:04:04.404 EAL: Detected CPU lcores: 10 00:04:04.404 EAL: Detected NUMA nodes: 1 00:04:04.404 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:04.404 EAL: Detected shared linkage of DPDK 00:04:04.404 EAL: No shared files mode enabled, IPC will be disabled 00:04:04.404 EAL: Selected IOVA mode 'PA' 00:04:04.404 EAL: Probing VFIO support... 00:04:04.404 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.404 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:04.404 EAL: Ask a virtual area of 0x2e000 bytes 00:04:04.404 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:04.404 EAL: Setting up physically contiguous memory... 00:04:04.404 EAL: Setting maximum number of open files to 524288 00:04:04.404 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:04.404 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:04.404 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.404 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:04.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.404 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.404 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:04.404 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:04.404 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.404 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:04.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.404 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.404 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:04.404 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:04.404 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.404 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:04.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.404 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.404 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:04.404 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:04.404 EAL: Ask a virtual area of 0x61000 bytes 00:04:04.404 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:04.404 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:04.404 EAL: Ask a virtual area of 0x400000000 bytes 00:04:04.404 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:04.404 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:04.404 EAL: Hugepages will be freed exactly as allocated. 
00:04:04.404 EAL: No shared files mode enabled, IPC is disabled 00:04:04.404 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: TSC frequency is ~2200000 KHz 00:04:04.664 EAL: Main lcore 0 is ready (tid=7f48290e4a00;cpuset=[0]) 00:04:04.664 EAL: Trying to obtain current memory policy. 00:04:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.664 EAL: Restoring previous memory policy: 0 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was expanded by 2MB 00:04:04.664 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:04.664 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:04.664 EAL: Mem event callback 'spdk:(nil)' registered 00:04:04.664 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:04.664 00:04:04.664 00:04:04.664 CUnit - A unit testing framework for C - Version 2.1-3 00:04:04.664 http://cunit.sourceforge.net/ 00:04:04.664 00:04:04.664 00:04:04.664 Suite: components_suite 00:04:04.664 Test: vtophys_malloc_test ...passed 00:04:04.664 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.664 EAL: Restoring previous memory policy: 4 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was expanded by 4MB 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was shrunk by 4MB 00:04:04.664 EAL: Trying to obtain current memory policy. 00:04:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.664 EAL: Restoring previous memory policy: 4 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was expanded by 6MB 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was shrunk by 6MB 00:04:04.664 EAL: Trying to obtain current memory policy. 00:04:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.664 EAL: Restoring previous memory policy: 4 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was expanded by 10MB 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was shrunk by 10MB 00:04:04.664 EAL: Trying to obtain current memory policy. 
00:04:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.664 EAL: Restoring previous memory policy: 4 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was expanded by 18MB 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was shrunk by 18MB 00:04:04.664 EAL: Trying to obtain current memory policy. 00:04:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.664 EAL: Restoring previous memory policy: 4 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was expanded by 34MB 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was shrunk by 34MB 00:04:04.664 EAL: Trying to obtain current memory policy. 00:04:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.664 EAL: Restoring previous memory policy: 4 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was expanded by 66MB 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was shrunk by 66MB 00:04:04.664 EAL: Trying to obtain current memory policy. 00:04:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.664 EAL: Restoring previous memory policy: 4 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was expanded by 130MB 00:04:04.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.664 EAL: request: mp_malloc_sync 00:04:04.664 EAL: No shared files mode enabled, IPC is disabled 00:04:04.664 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.664 EAL: Trying to obtain current memory policy. 00:04:04.664 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.923 EAL: Restoring previous memory policy: 4 00:04:04.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.923 EAL: request: mp_malloc_sync 00:04:04.923 EAL: No shared files mode enabled, IPC is disabled 00:04:04.923 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.923 EAL: request: mp_malloc_sync 00:04:04.923 EAL: No shared files mode enabled, IPC is disabled 00:04:04.923 EAL: Heap on socket 0 was shrunk by 258MB 00:04:04.923 EAL: Trying to obtain current memory policy. 
00:04:04.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.923 EAL: Restoring previous memory policy: 4 00:04:04.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.923 EAL: request: mp_malloc_sync 00:04:04.923 EAL: No shared files mode enabled, IPC is disabled 00:04:04.923 EAL: Heap on socket 0 was expanded by 514MB 00:04:05.181 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.181 EAL: request: mp_malloc_sync 00:04:05.181 EAL: No shared files mode enabled, IPC is disabled 00:04:05.181 EAL: Heap on socket 0 was shrunk by 514MB 00:04:05.181 EAL: Trying to obtain current memory policy. 00:04:05.182 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.439 EAL: Restoring previous memory policy: 4 00:04:05.439 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.439 EAL: request: mp_malloc_sync 00:04:05.439 EAL: No shared files mode enabled, IPC is disabled 00:04:05.439 EAL: Heap on socket 0 was expanded by 1026MB 00:04:05.698 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.957 EAL: request: mp_malloc_sync 00:04:05.957 EAL: No shared files mode enabled, IPC is disabled 00:04:05.957 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:05.957 passed 00:04:05.957 00:04:05.957 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.957 suites 1 1 n/a 0 0 00:04:05.957 tests 2 2 2 0 0 00:04:05.957 asserts 5386 5386 5386 0 n/a 00:04:05.957 00:04:05.957 Elapsed time = 1.294 seconds 00:04:05.957 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.957 EAL: request: mp_malloc_sync 00:04:05.957 EAL: No shared files mode enabled, IPC is disabled 00:04:05.957 EAL: Heap on socket 0 was shrunk by 2MB 00:04:05.957 EAL: No shared files mode enabled, IPC is disabled 00:04:05.957 EAL: No shared files mode enabled, IPC is disabled 00:04:05.957 EAL: No shared files mode enabled, IPC is disabled 00:04:05.957 00:04:05.957 real 0m1.484s 00:04:05.957 user 0m0.806s 00:04:05.957 sys 0m0.541s 00:04:05.957 18:25:19 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.957 18:25:19 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:05.957 ************************************ 00:04:05.957 END TEST env_vtophys 00:04:05.957 ************************************ 00:04:05.957 18:25:19 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.957 18:25:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:05.957 18:25:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.957 18:25:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.957 ************************************ 00:04:05.957 START TEST env_pci 00:04:05.957 ************************************ 00:04:05.957 18:25:19 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:05.957 00:04:05.957 00:04:05.957 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.957 http://cunit.sourceforge.net/ 00:04:05.957 00:04:05.957 00:04:05.957 Suite: pci 00:04:05.957 Test: pci_hook ...[2024-05-16 18:25:19.371395] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58500 has claimed it 00:04:05.957 passed 00:04:05.957 00:04:05.957 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.957 suites 1 1 n/a 0 0 00:04:05.957 EAL: Cannot find device (10000:00:01.0) 00:04:05.957 EAL: Failed to attach device on primary process 00:04:05.957 tests 1 1 1 0 0 
00:04:05.957 asserts 25 25 25 0 n/a 00:04:05.957 00:04:05.957 Elapsed time = 0.002 seconds 00:04:05.957 00:04:05.957 real 0m0.019s 00:04:05.957 user 0m0.010s 00:04:05.957 sys 0m0.008s 00:04:05.957 18:25:19 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.957 18:25:19 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:05.957 ************************************ 00:04:05.957 END TEST env_pci 00:04:05.957 ************************************ 00:04:05.957 18:25:19 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:05.957 18:25:19 env -- env/env.sh@15 -- # uname 00:04:05.957 18:25:19 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:05.957 18:25:19 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:05.957 18:25:19 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:05.957 18:25:19 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:05.957 18:25:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.957 18:25:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:05.957 ************************************ 00:04:05.957 START TEST env_dpdk_post_init 00:04:05.957 ************************************ 00:04:05.957 18:25:19 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:06.216 EAL: Detected CPU lcores: 10 00:04:06.216 EAL: Detected NUMA nodes: 1 00:04:06.216 EAL: Detected shared linkage of DPDK 00:04:06.216 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.216 EAL: Selected IOVA mode 'PA' 00:04:06.216 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.216 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:06.216 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:06.216 Starting DPDK initialization... 00:04:06.216 Starting SPDK post initialization... 00:04:06.216 SPDK NVMe probe 00:04:06.216 Attaching to 0000:00:10.0 00:04:06.216 Attaching to 0000:00:11.0 00:04:06.216 Attached to 0000:00:10.0 00:04:06.216 Attached to 0000:00:11.0 00:04:06.216 Cleaning up... 
00:04:06.216 00:04:06.216 real 0m0.171s 00:04:06.216 user 0m0.033s 00:04:06.216 sys 0m0.039s 00:04:06.216 18:25:19 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.216 18:25:19 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.216 ************************************ 00:04:06.216 END TEST env_dpdk_post_init 00:04:06.216 ************************************ 00:04:06.216 18:25:19 env -- env/env.sh@26 -- # uname 00:04:06.216 18:25:19 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.216 18:25:19 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.216 18:25:19 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.216 18:25:19 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.216 18:25:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.216 ************************************ 00:04:06.216 START TEST env_mem_callbacks 00:04:06.216 ************************************ 00:04:06.216 18:25:19 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.216 EAL: Detected CPU lcores: 10 00:04:06.216 EAL: Detected NUMA nodes: 1 00:04:06.216 EAL: Detected shared linkage of DPDK 00:04:06.216 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.216 EAL: Selected IOVA mode 'PA' 00:04:06.475 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.475 00:04:06.475 00:04:06.475 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.475 http://cunit.sourceforge.net/ 00:04:06.475 00:04:06.475 00:04:06.475 Suite: memory 00:04:06.475 Test: test ... 00:04:06.475 register 0x200000200000 2097152 00:04:06.475 malloc 3145728 00:04:06.475 register 0x200000400000 4194304 00:04:06.475 buf 0x200000500000 len 3145728 PASSED 00:04:06.475 malloc 64 00:04:06.475 buf 0x2000004fff40 len 64 PASSED 00:04:06.475 malloc 4194304 00:04:06.475 register 0x200000800000 6291456 00:04:06.475 buf 0x200000a00000 len 4194304 PASSED 00:04:06.475 free 0x200000500000 3145728 00:04:06.475 free 0x2000004fff40 64 00:04:06.475 unregister 0x200000400000 4194304 PASSED 00:04:06.475 free 0x200000a00000 4194304 00:04:06.475 unregister 0x200000800000 6291456 PASSED 00:04:06.475 malloc 8388608 00:04:06.475 register 0x200000400000 10485760 00:04:06.475 buf 0x200000600000 len 8388608 PASSED 00:04:06.475 free 0x200000600000 8388608 00:04:06.475 unregister 0x200000400000 10485760 PASSED 00:04:06.475 passed 00:04:06.475 00:04:06.475 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.475 suites 1 1 n/a 0 0 00:04:06.475 tests 1 1 1 0 0 00:04:06.475 asserts 15 15 15 0 n/a 00:04:06.475 00:04:06.475 Elapsed time = 0.010 seconds 00:04:06.475 ************************************ 00:04:06.475 END TEST env_mem_callbacks 00:04:06.475 ************************************ 00:04:06.475 00:04:06.475 real 0m0.146s 00:04:06.475 user 0m0.022s 00:04:06.475 sys 0m0.022s 00:04:06.475 18:25:19 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.475 18:25:19 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:06.475 00:04:06.475 real 0m2.382s 00:04:06.475 user 0m1.203s 00:04:06.475 sys 0m0.828s 00:04:06.475 18:25:19 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.475 18:25:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.475 ************************************ 00:04:06.475 END TEST env 00:04:06.475 
************************************ 00:04:06.475 18:25:19 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:06.475 18:25:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.475 18:25:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.475 18:25:19 -- common/autotest_common.sh@10 -- # set +x 00:04:06.475 ************************************ 00:04:06.475 START TEST rpc 00:04:06.475 ************************************ 00:04:06.475 18:25:19 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:06.475 * Looking for test storage... 00:04:06.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:06.475 18:25:19 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58615 00:04:06.475 18:25:19 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:06.475 18:25:19 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.475 18:25:19 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58615 00:04:06.475 18:25:19 rpc -- common/autotest_common.sh@827 -- # '[' -z 58615 ']' 00:04:06.475 18:25:19 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.475 18:25:19 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:06.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.475 18:25:19 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.475 18:25:19 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:06.475 18:25:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.733 [2024-05-16 18:25:20.037792] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:06.734 [2024-05-16 18:25:20.037914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58615 ] 00:04:06.734 [2024-05-16 18:25:20.177292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.992 [2024-05-16 18:25:20.300706] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:06.992 [2024-05-16 18:25:20.300777] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58615' to capture a snapshot of events at runtime. 00:04:06.992 [2024-05-16 18:25:20.300788] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:06.992 [2024-05-16 18:25:20.300796] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:06.992 [2024-05-16 18:25:20.300803] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58615 for offline analysis/debug. 
00:04:06.992 [2024-05-16 18:25:20.300833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.992 [2024-05-16 18:25:20.359029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:07.930 18:25:21 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:07.930 18:25:21 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:07.930 18:25:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.930 18:25:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:07.930 18:25:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:07.930 18:25:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:07.930 18:25:21 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:07.930 18:25:21 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:07.930 18:25:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.930 ************************************ 00:04:07.930 START TEST rpc_integrity 00:04:07.930 ************************************ 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.930 { 00:04:07.930 "name": "Malloc0", 00:04:07.930 "aliases": [ 00:04:07.930 "f206bb1c-d3d1-4deb-bf27-10fee58758ed" 00:04:07.930 ], 00:04:07.930 "product_name": "Malloc disk", 00:04:07.930 "block_size": 512, 00:04:07.930 "num_blocks": 16384, 00:04:07.930 "uuid": "f206bb1c-d3d1-4deb-bf27-10fee58758ed", 00:04:07.930 "assigned_rate_limits": { 00:04:07.930 "rw_ios_per_sec": 0, 00:04:07.930 "rw_mbytes_per_sec": 0, 00:04:07.930 "r_mbytes_per_sec": 0, 00:04:07.930 "w_mbytes_per_sec": 0 00:04:07.930 }, 00:04:07.930 "claimed": false, 00:04:07.930 "zoned": false, 00:04:07.930 
"supported_io_types": { 00:04:07.930 "read": true, 00:04:07.930 "write": true, 00:04:07.930 "unmap": true, 00:04:07.930 "write_zeroes": true, 00:04:07.930 "flush": true, 00:04:07.930 "reset": true, 00:04:07.930 "compare": false, 00:04:07.930 "compare_and_write": false, 00:04:07.930 "abort": true, 00:04:07.930 "nvme_admin": false, 00:04:07.930 "nvme_io": false 00:04:07.930 }, 00:04:07.930 "memory_domains": [ 00:04:07.930 { 00:04:07.930 "dma_device_id": "system", 00:04:07.930 "dma_device_type": 1 00:04:07.930 }, 00:04:07.930 { 00:04:07.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.930 "dma_device_type": 2 00:04:07.930 } 00:04:07.930 ], 00:04:07.930 "driver_specific": {} 00:04:07.930 } 00:04:07.930 ]' 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.930 [2024-05-16 18:25:21.224254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:07.930 [2024-05-16 18:25:21.224307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.930 [2024-05-16 18:25:21.224333] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc1d3d0 00:04:07.930 [2024-05-16 18:25:21.224343] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.930 [2024-05-16 18:25:21.226061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.930 [2024-05-16 18:25:21.226095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.930 Passthru0 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.930 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.930 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.930 { 00:04:07.930 "name": "Malloc0", 00:04:07.930 "aliases": [ 00:04:07.930 "f206bb1c-d3d1-4deb-bf27-10fee58758ed" 00:04:07.930 ], 00:04:07.930 "product_name": "Malloc disk", 00:04:07.930 "block_size": 512, 00:04:07.930 "num_blocks": 16384, 00:04:07.930 "uuid": "f206bb1c-d3d1-4deb-bf27-10fee58758ed", 00:04:07.930 "assigned_rate_limits": { 00:04:07.930 "rw_ios_per_sec": 0, 00:04:07.930 "rw_mbytes_per_sec": 0, 00:04:07.930 "r_mbytes_per_sec": 0, 00:04:07.930 "w_mbytes_per_sec": 0 00:04:07.930 }, 00:04:07.930 "claimed": true, 00:04:07.930 "claim_type": "exclusive_write", 00:04:07.930 "zoned": false, 00:04:07.930 "supported_io_types": { 00:04:07.930 "read": true, 00:04:07.930 "write": true, 00:04:07.930 "unmap": true, 00:04:07.930 "write_zeroes": true, 00:04:07.930 "flush": true, 00:04:07.930 "reset": true, 00:04:07.930 "compare": false, 00:04:07.930 "compare_and_write": false, 00:04:07.930 "abort": true, 00:04:07.930 "nvme_admin": false, 00:04:07.930 "nvme_io": false 00:04:07.930 }, 00:04:07.930 "memory_domains": [ 00:04:07.930 { 00:04:07.930 "dma_device_id": "system", 00:04:07.930 "dma_device_type": 1 
00:04:07.930 }, 00:04:07.930 { 00:04:07.930 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.930 "dma_device_type": 2 00:04:07.930 } 00:04:07.930 ], 00:04:07.930 "driver_specific": {} 00:04:07.930 }, 00:04:07.930 { 00:04:07.930 "name": "Passthru0", 00:04:07.930 "aliases": [ 00:04:07.930 "a5dff9f7-4e9c-5922-9163-6a0fb7360c17" 00:04:07.930 ], 00:04:07.930 "product_name": "passthru", 00:04:07.930 "block_size": 512, 00:04:07.930 "num_blocks": 16384, 00:04:07.930 "uuid": "a5dff9f7-4e9c-5922-9163-6a0fb7360c17", 00:04:07.930 "assigned_rate_limits": { 00:04:07.930 "rw_ios_per_sec": 0, 00:04:07.930 "rw_mbytes_per_sec": 0, 00:04:07.930 "r_mbytes_per_sec": 0, 00:04:07.931 "w_mbytes_per_sec": 0 00:04:07.931 }, 00:04:07.931 "claimed": false, 00:04:07.931 "zoned": false, 00:04:07.931 "supported_io_types": { 00:04:07.931 "read": true, 00:04:07.931 "write": true, 00:04:07.931 "unmap": true, 00:04:07.931 "write_zeroes": true, 00:04:07.931 "flush": true, 00:04:07.931 "reset": true, 00:04:07.931 "compare": false, 00:04:07.931 "compare_and_write": false, 00:04:07.931 "abort": true, 00:04:07.931 "nvme_admin": false, 00:04:07.931 "nvme_io": false 00:04:07.931 }, 00:04:07.931 "memory_domains": [ 00:04:07.931 { 00:04:07.931 "dma_device_id": "system", 00:04:07.931 "dma_device_type": 1 00:04:07.931 }, 00:04:07.931 { 00:04:07.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.931 "dma_device_type": 2 00:04:07.931 } 00:04:07.931 ], 00:04:07.931 "driver_specific": { 00:04:07.931 "passthru": { 00:04:07.931 "name": "Passthru0", 00:04:07.931 "base_bdev_name": "Malloc0" 00:04:07.931 } 00:04:07.931 } 00:04:07.931 } 00:04:07.931 ]' 00:04:07.931 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.931 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.931 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.931 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.931 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.931 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.931 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.931 18:25:21 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.931 00:04:07.931 real 0m0.339s 00:04:07.931 user 0m0.239s 00:04:07.931 sys 0m0.034s 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:07.931 18:25:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.931 ************************************ 00:04:07.931 END TEST rpc_integrity 00:04:07.931 ************************************ 00:04:08.247 18:25:21 rpc -- rpc/rpc.sh@74 -- # run_test 
rpc_plugins rpc_plugins 00:04:08.247 18:25:21 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.247 18:25:21 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.247 18:25:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.247 ************************************ 00:04:08.247 START TEST rpc_plugins 00:04:08.247 ************************************ 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:08.247 { 00:04:08.247 "name": "Malloc1", 00:04:08.247 "aliases": [ 00:04:08.247 "24176473-0b7d-4f2c-85c4-f3b66e766fca" 00:04:08.247 ], 00:04:08.247 "product_name": "Malloc disk", 00:04:08.247 "block_size": 4096, 00:04:08.247 "num_blocks": 256, 00:04:08.247 "uuid": "24176473-0b7d-4f2c-85c4-f3b66e766fca", 00:04:08.247 "assigned_rate_limits": { 00:04:08.247 "rw_ios_per_sec": 0, 00:04:08.247 "rw_mbytes_per_sec": 0, 00:04:08.247 "r_mbytes_per_sec": 0, 00:04:08.247 "w_mbytes_per_sec": 0 00:04:08.247 }, 00:04:08.247 "claimed": false, 00:04:08.247 "zoned": false, 00:04:08.247 "supported_io_types": { 00:04:08.247 "read": true, 00:04:08.247 "write": true, 00:04:08.247 "unmap": true, 00:04:08.247 "write_zeroes": true, 00:04:08.247 "flush": true, 00:04:08.247 "reset": true, 00:04:08.247 "compare": false, 00:04:08.247 "compare_and_write": false, 00:04:08.247 "abort": true, 00:04:08.247 "nvme_admin": false, 00:04:08.247 "nvme_io": false 00:04:08.247 }, 00:04:08.247 "memory_domains": [ 00:04:08.247 { 00:04:08.247 "dma_device_id": "system", 00:04:08.247 "dma_device_type": 1 00:04:08.247 }, 00:04:08.247 { 00:04:08.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.247 "dma_device_type": 2 00:04:08.247 } 00:04:08.247 ], 00:04:08.247 "driver_specific": {} 00:04:08.247 } 00:04:08.247 ]' 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # 
bdevs='[]' 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:08.247 18:25:21 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:08.247 00:04:08.247 real 0m0.166s 00:04:08.247 user 0m0.112s 00:04:08.247 sys 0m0.019s 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.247 18:25:21 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:08.247 ************************************ 00:04:08.247 END TEST rpc_plugins 00:04:08.247 ************************************ 00:04:08.247 18:25:21 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:08.247 18:25:21 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.247 18:25:21 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.247 18:25:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.247 ************************************ 00:04:08.247 START TEST rpc_trace_cmd_test 00:04:08.247 ************************************ 00:04:08.247 18:25:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:08.247 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:08.247 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:08.247 18:25:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.247 18:25:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.247 18:25:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.247 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:08.247 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58615", 00:04:08.247 "tpoint_group_mask": "0x8", 00:04:08.247 "iscsi_conn": { 00:04:08.247 "mask": "0x2", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "scsi": { 00:04:08.247 "mask": "0x4", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "bdev": { 00:04:08.247 "mask": "0x8", 00:04:08.247 "tpoint_mask": "0xffffffffffffffff" 00:04:08.247 }, 00:04:08.247 "nvmf_rdma": { 00:04:08.247 "mask": "0x10", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "nvmf_tcp": { 00:04:08.247 "mask": "0x20", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "ftl": { 00:04:08.247 "mask": "0x40", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "blobfs": { 00:04:08.247 "mask": "0x80", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "dsa": { 00:04:08.247 "mask": "0x200", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "thread": { 00:04:08.247 "mask": "0x400", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "nvme_pcie": { 00:04:08.247 "mask": "0x800", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "iaa": { 00:04:08.247 "mask": "0x1000", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "nvme_tcp": { 00:04:08.247 "mask": "0x2000", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "bdev_nvme": { 00:04:08.247 "mask": "0x4000", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 }, 00:04:08.247 "sock": { 00:04:08.247 "mask": "0x8000", 00:04:08.247 "tpoint_mask": "0x0" 00:04:08.247 } 00:04:08.248 }' 00:04:08.248 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:08.505 18:25:21 
rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:08.505 00:04:08.505 real 0m0.287s 00:04:08.505 user 0m0.244s 00:04:08.505 sys 0m0.031s 00:04:08.505 ************************************ 00:04:08.505 END TEST rpc_trace_cmd_test 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.505 18:25:21 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:08.505 ************************************ 00:04:08.765 18:25:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:08.765 18:25:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:08.765 18:25:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:08.765 18:25:22 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:08.765 18:25:22 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:08.765 18:25:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.765 ************************************ 00:04:08.765 START TEST rpc_daemon_integrity 00:04:08.765 ************************************ 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.765 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:08.765 { 00:04:08.765 "name": "Malloc2", 00:04:08.765 "aliases": [ 00:04:08.765 "ee1f1960-7da5-4705-971b-8c4584c62da6" 00:04:08.765 ], 00:04:08.765 "product_name": "Malloc disk", 00:04:08.765 "block_size": 512, 00:04:08.765 "num_blocks": 16384, 00:04:08.765 "uuid": 
"ee1f1960-7da5-4705-971b-8c4584c62da6", 00:04:08.765 "assigned_rate_limits": { 00:04:08.765 "rw_ios_per_sec": 0, 00:04:08.766 "rw_mbytes_per_sec": 0, 00:04:08.766 "r_mbytes_per_sec": 0, 00:04:08.766 "w_mbytes_per_sec": 0 00:04:08.766 }, 00:04:08.766 "claimed": false, 00:04:08.766 "zoned": false, 00:04:08.766 "supported_io_types": { 00:04:08.766 "read": true, 00:04:08.766 "write": true, 00:04:08.766 "unmap": true, 00:04:08.766 "write_zeroes": true, 00:04:08.766 "flush": true, 00:04:08.766 "reset": true, 00:04:08.766 "compare": false, 00:04:08.766 "compare_and_write": false, 00:04:08.766 "abort": true, 00:04:08.766 "nvme_admin": false, 00:04:08.766 "nvme_io": false 00:04:08.766 }, 00:04:08.766 "memory_domains": [ 00:04:08.766 { 00:04:08.766 "dma_device_id": "system", 00:04:08.766 "dma_device_type": 1 00:04:08.766 }, 00:04:08.766 { 00:04:08.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.766 "dma_device_type": 2 00:04:08.766 } 00:04:08.766 ], 00:04:08.766 "driver_specific": {} 00:04:08.766 } 00:04:08.766 ]' 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.766 [2024-05-16 18:25:22.181302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:08.766 [2024-05-16 18:25:22.181367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:08.766 [2024-05-16 18:25:22.181386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc1caf0 00:04:08.766 [2024-05-16 18:25:22.181394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:08.766 [2024-05-16 18:25:22.183152] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:08.766 [2024-05-16 18:25:22.183186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:08.766 Passthru0 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:08.766 { 00:04:08.766 "name": "Malloc2", 00:04:08.766 "aliases": [ 00:04:08.766 "ee1f1960-7da5-4705-971b-8c4584c62da6" 00:04:08.766 ], 00:04:08.766 "product_name": "Malloc disk", 00:04:08.766 "block_size": 512, 00:04:08.766 "num_blocks": 16384, 00:04:08.766 "uuid": "ee1f1960-7da5-4705-971b-8c4584c62da6", 00:04:08.766 "assigned_rate_limits": { 00:04:08.766 "rw_ios_per_sec": 0, 00:04:08.766 "rw_mbytes_per_sec": 0, 00:04:08.766 "r_mbytes_per_sec": 0, 00:04:08.766 "w_mbytes_per_sec": 0 00:04:08.766 }, 00:04:08.766 "claimed": true, 00:04:08.766 "claim_type": "exclusive_write", 00:04:08.766 "zoned": false, 00:04:08.766 "supported_io_types": { 00:04:08.766 "read": true, 00:04:08.766 "write": true, 00:04:08.766 "unmap": true, 00:04:08.766 
"write_zeroes": true, 00:04:08.766 "flush": true, 00:04:08.766 "reset": true, 00:04:08.766 "compare": false, 00:04:08.766 "compare_and_write": false, 00:04:08.766 "abort": true, 00:04:08.766 "nvme_admin": false, 00:04:08.766 "nvme_io": false 00:04:08.766 }, 00:04:08.766 "memory_domains": [ 00:04:08.766 { 00:04:08.766 "dma_device_id": "system", 00:04:08.766 "dma_device_type": 1 00:04:08.766 }, 00:04:08.766 { 00:04:08.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.766 "dma_device_type": 2 00:04:08.766 } 00:04:08.766 ], 00:04:08.766 "driver_specific": {} 00:04:08.766 }, 00:04:08.766 { 00:04:08.766 "name": "Passthru0", 00:04:08.766 "aliases": [ 00:04:08.766 "a2c8864f-3a6a-5582-826b-9569b743fefa" 00:04:08.766 ], 00:04:08.766 "product_name": "passthru", 00:04:08.766 "block_size": 512, 00:04:08.766 "num_blocks": 16384, 00:04:08.766 "uuid": "a2c8864f-3a6a-5582-826b-9569b743fefa", 00:04:08.766 "assigned_rate_limits": { 00:04:08.766 "rw_ios_per_sec": 0, 00:04:08.766 "rw_mbytes_per_sec": 0, 00:04:08.766 "r_mbytes_per_sec": 0, 00:04:08.766 "w_mbytes_per_sec": 0 00:04:08.766 }, 00:04:08.766 "claimed": false, 00:04:08.766 "zoned": false, 00:04:08.766 "supported_io_types": { 00:04:08.766 "read": true, 00:04:08.766 "write": true, 00:04:08.766 "unmap": true, 00:04:08.766 "write_zeroes": true, 00:04:08.766 "flush": true, 00:04:08.766 "reset": true, 00:04:08.766 "compare": false, 00:04:08.766 "compare_and_write": false, 00:04:08.766 "abort": true, 00:04:08.766 "nvme_admin": false, 00:04:08.766 "nvme_io": false 00:04:08.766 }, 00:04:08.766 "memory_domains": [ 00:04:08.766 { 00:04:08.766 "dma_device_id": "system", 00:04:08.766 "dma_device_type": 1 00:04:08.766 }, 00:04:08.766 { 00:04:08.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:08.766 "dma_device_type": 2 00:04:08.766 } 00:04:08.766 ], 00:04:08.766 "driver_specific": { 00:04:08.766 "passthru": { 00:04:08.766 "name": "Passthru0", 00:04:08.766 "base_bdev_name": "Malloc2" 00:04:08.766 } 00:04:08.766 } 00:04:08.766 } 00:04:08.766 ]' 00:04:08.766 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.025 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.026 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.026 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.026 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.026 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.026 18:25:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.026 18:25:22 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.026 00:04:09.026 real 0m0.325s 00:04:09.026 user 0m0.220s 00:04:09.026 sys 0m0.039s 00:04:09.026 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:09.026 ************************************ 00:04:09.026 END TEST rpc_daemon_integrity 00:04:09.026 18:25:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.026 ************************************ 00:04:09.026 18:25:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:09.026 18:25:22 rpc -- rpc/rpc.sh@84 -- # killprocess 58615 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@946 -- # '[' -z 58615 ']' 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@950 -- # kill -0 58615 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@951 -- # uname 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58615 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:09.026 killing process with pid 58615 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58615' 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@965 -- # kill 58615 00:04:09.026 18:25:22 rpc -- common/autotest_common.sh@970 -- # wait 58615 00:04:09.594 00:04:09.594 real 0m2.933s 00:04:09.594 user 0m3.876s 00:04:09.594 sys 0m0.670s 00:04:09.594 18:25:22 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:09.594 18:25:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.594 ************************************ 00:04:09.594 END TEST rpc 00:04:09.594 ************************************ 00:04:09.594 18:25:22 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:09.594 18:25:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:09.594 18:25:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.594 18:25:22 -- common/autotest_common.sh@10 -- # set +x 00:04:09.594 ************************************ 00:04:09.594 START TEST skip_rpc 00:04:09.594 ************************************ 00:04:09.594 18:25:22 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:09.594 * Looking for test storage... 
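The rpc_integrity and rpc_daemon_integrity cases above reduce to a create/inspect/delete round trip over that socket: a malloc bdev, a passthru bdev claiming it, a jq length check against bdev_get_bdevs, then teardown. A hedged recreation against a target that is already listening, with the same SPDK_DIR assumption:

```bash
# Sketch of the *_integrity round trip (target assumed to be up on /var/tmp/spdk.sock).
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumption: local SPDK build tree
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

malloc=$(rpc bdev_malloc_create 8 512)   # 8 MiB / 512-byte blocks -> 16384 blocks; prints the bdev name
rpc bdev_passthru_create -b "$malloc" -p Passthru0

# Two bdevs should now be listed: the malloc base and the passthru that claims it.
count=$(rpc bdev_get_bdevs | jq length)
[ "$count" -eq 2 ] || echo "unexpected bdev count: $count" >&2

rpc bdev_passthru_delete Passthru0
rpc bdev_malloc_delete "$malloc"
```

The claim is what distinguishes the second bdev_get_bdevs dump from the first: the base bdev flips to "claimed": true with "claim_type": "exclusive_write" once the passthru is layered on top of it.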
00:04:09.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:09.594 18:25:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:09.594 18:25:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:09.594 18:25:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:09.594 18:25:22 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:09.594 18:25:22 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:09.594 18:25:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.594 ************************************ 00:04:09.594 START TEST skip_rpc 00:04:09.594 ************************************ 00:04:09.594 18:25:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:09.594 18:25:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58813 00:04:09.594 18:25:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.594 18:25:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:09.594 18:25:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:09.594 [2024-05-16 18:25:23.023226] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:09.594 [2024-05-16 18:25:23.023343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58813 ] 00:04:09.854 [2024-05-16 18:25:23.160482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.854 [2024-05-16 18:25:23.263682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.854 [2024-05-16 18:25:23.322555] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - 
SIGINT SIGTERM EXIT 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58813 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 58813 ']' 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 58813 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58813 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:15.136 killing process with pid 58813 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58813' 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 58813 00:04:15.136 18:25:27 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 58813 00:04:15.136 00:04:15.136 real 0m5.445s 00:04:15.136 user 0m5.064s 00:04:15.136 sys 0m0.288s 00:04:15.136 18:25:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:15.136 18:25:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.136 ************************************ 00:04:15.136 END TEST skip_rpc 00:04:15.136 ************************************ 00:04:15.136 18:25:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:15.136 18:25:28 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.136 18:25:28 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.136 18:25:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.136 ************************************ 00:04:15.136 START TEST skip_rpc_with_json 00:04:15.136 ************************************ 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58894 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58894 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 58894 ']' 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:15.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
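The plain skip_rpc case that just finished starts the target with --no-rpc-server and treats a failing RPC as the pass condition. The same negative check, sketched stand-alone under the SPDK_DIR assumption (the five-second sleep mirrors the script, since there is no socket to wait for):

```bash
# Sketch: with --no-rpc-server nothing listens on /var/tmp/spdk.sock, so the RPC must fail.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumption: local SPDK build tree

"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5   # the test script also just sleeps here

if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version 2>/dev/null; then
    echo "RPC unexpectedly succeeded" >&2
    kill "$tgt_pid"
    exit 1
fi
kill "$tgt_pid"
```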
00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:15.136 18:25:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:15.136 [2024-05-16 18:25:28.523732] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:15.136 [2024-05-16 18:25:28.523853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58894 ] 00:04:15.395 [2024-05-16 18:25:28.661865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.395 [2024-05-16 18:25:28.776130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.395 [2024-05-16 18:25:28.832651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.334 [2024-05-16 18:25:29.507515] nvmf_rpc.c:2548:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.334 request: 00:04:16.334 { 00:04:16.334 "trtype": "tcp", 00:04:16.334 "method": "nvmf_get_transports", 00:04:16.334 "req_id": 1 00:04:16.334 } 00:04:16.334 Got JSON-RPC error response 00:04:16.334 response: 00:04:16.334 { 00:04:16.334 "code": -19, 00:04:16.334 "message": "No such device" 00:04:16.334 } 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.334 [2024-05-16 18:25:29.519611] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.334 18:25:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.334 { 00:04:16.334 "subsystems": [ 00:04:16.334 { 00:04:16.334 "subsystem": "keyring", 00:04:16.334 "config": [] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "iobuf", 00:04:16.334 "config": [ 00:04:16.334 { 00:04:16.334 "method": "iobuf_set_options", 00:04:16.334 "params": { 00:04:16.334 "small_pool_count": 8192, 00:04:16.334 "large_pool_count": 1024, 00:04:16.334 "small_bufsize": 8192, 00:04:16.334 "large_bufsize": 135168 00:04:16.334 } 00:04:16.334 } 00:04:16.334 ] 00:04:16.334 }, 
00:04:16.334 { 00:04:16.334 "subsystem": "sock", 00:04:16.334 "config": [ 00:04:16.334 { 00:04:16.334 "method": "sock_set_default_impl", 00:04:16.334 "params": { 00:04:16.334 "impl_name": "uring" 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "sock_impl_set_options", 00:04:16.334 "params": { 00:04:16.334 "impl_name": "ssl", 00:04:16.334 "recv_buf_size": 4096, 00:04:16.334 "send_buf_size": 4096, 00:04:16.334 "enable_recv_pipe": true, 00:04:16.334 "enable_quickack": false, 00:04:16.334 "enable_placement_id": 0, 00:04:16.334 "enable_zerocopy_send_server": true, 00:04:16.334 "enable_zerocopy_send_client": false, 00:04:16.334 "zerocopy_threshold": 0, 00:04:16.334 "tls_version": 0, 00:04:16.334 "enable_ktls": false 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "sock_impl_set_options", 00:04:16.334 "params": { 00:04:16.334 "impl_name": "posix", 00:04:16.334 "recv_buf_size": 2097152, 00:04:16.334 "send_buf_size": 2097152, 00:04:16.334 "enable_recv_pipe": true, 00:04:16.334 "enable_quickack": false, 00:04:16.334 "enable_placement_id": 0, 00:04:16.334 "enable_zerocopy_send_server": true, 00:04:16.334 "enable_zerocopy_send_client": false, 00:04:16.334 "zerocopy_threshold": 0, 00:04:16.334 "tls_version": 0, 00:04:16.334 "enable_ktls": false 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "sock_impl_set_options", 00:04:16.334 "params": { 00:04:16.334 "impl_name": "uring", 00:04:16.334 "recv_buf_size": 2097152, 00:04:16.334 "send_buf_size": 2097152, 00:04:16.334 "enable_recv_pipe": true, 00:04:16.334 "enable_quickack": false, 00:04:16.334 "enable_placement_id": 0, 00:04:16.334 "enable_zerocopy_send_server": false, 00:04:16.334 "enable_zerocopy_send_client": false, 00:04:16.334 "zerocopy_threshold": 0, 00:04:16.334 "tls_version": 0, 00:04:16.334 "enable_ktls": false 00:04:16.334 } 00:04:16.334 } 00:04:16.334 ] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "vmd", 00:04:16.334 "config": [] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "accel", 00:04:16.334 "config": [ 00:04:16.334 { 00:04:16.334 "method": "accel_set_options", 00:04:16.334 "params": { 00:04:16.334 "small_cache_size": 128, 00:04:16.334 "large_cache_size": 16, 00:04:16.334 "task_count": 2048, 00:04:16.334 "sequence_count": 2048, 00:04:16.334 "buf_count": 2048 00:04:16.334 } 00:04:16.334 } 00:04:16.334 ] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "bdev", 00:04:16.334 "config": [ 00:04:16.334 { 00:04:16.334 "method": "bdev_set_options", 00:04:16.334 "params": { 00:04:16.334 "bdev_io_pool_size": 65535, 00:04:16.334 "bdev_io_cache_size": 256, 00:04:16.334 "bdev_auto_examine": true, 00:04:16.334 "iobuf_small_cache_size": 128, 00:04:16.334 "iobuf_large_cache_size": 16 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "bdev_raid_set_options", 00:04:16.334 "params": { 00:04:16.334 "process_window_size_kb": 1024 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "bdev_iscsi_set_options", 00:04:16.334 "params": { 00:04:16.334 "timeout_sec": 30 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "bdev_nvme_set_options", 00:04:16.334 "params": { 00:04:16.334 "action_on_timeout": "none", 00:04:16.334 "timeout_us": 0, 00:04:16.334 "timeout_admin_us": 0, 00:04:16.334 "keep_alive_timeout_ms": 10000, 00:04:16.334 "arbitration_burst": 0, 00:04:16.334 "low_priority_weight": 0, 00:04:16.334 "medium_priority_weight": 0, 00:04:16.334 "high_priority_weight": 0, 00:04:16.334 "nvme_adminq_poll_period_us": 10000, 
00:04:16.334 "nvme_ioq_poll_period_us": 0, 00:04:16.334 "io_queue_requests": 0, 00:04:16.334 "delay_cmd_submit": true, 00:04:16.334 "transport_retry_count": 4, 00:04:16.334 "bdev_retry_count": 3, 00:04:16.334 "transport_ack_timeout": 0, 00:04:16.334 "ctrlr_loss_timeout_sec": 0, 00:04:16.334 "reconnect_delay_sec": 0, 00:04:16.334 "fast_io_fail_timeout_sec": 0, 00:04:16.334 "disable_auto_failback": false, 00:04:16.334 "generate_uuids": false, 00:04:16.334 "transport_tos": 0, 00:04:16.334 "nvme_error_stat": false, 00:04:16.334 "rdma_srq_size": 0, 00:04:16.334 "io_path_stat": false, 00:04:16.334 "allow_accel_sequence": false, 00:04:16.334 "rdma_max_cq_size": 0, 00:04:16.334 "rdma_cm_event_timeout_ms": 0, 00:04:16.334 "dhchap_digests": [ 00:04:16.334 "sha256", 00:04:16.334 "sha384", 00:04:16.334 "sha512" 00:04:16.334 ], 00:04:16.334 "dhchap_dhgroups": [ 00:04:16.334 "null", 00:04:16.334 "ffdhe2048", 00:04:16.334 "ffdhe3072", 00:04:16.334 "ffdhe4096", 00:04:16.334 "ffdhe6144", 00:04:16.334 "ffdhe8192" 00:04:16.334 ] 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "bdev_nvme_set_hotplug", 00:04:16.334 "params": { 00:04:16.334 "period_us": 100000, 00:04:16.334 "enable": false 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "bdev_wait_for_examine" 00:04:16.334 } 00:04:16.334 ] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "scsi", 00:04:16.334 "config": null 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "scheduler", 00:04:16.334 "config": [ 00:04:16.334 { 00:04:16.334 "method": "framework_set_scheduler", 00:04:16.334 "params": { 00:04:16.334 "name": "static" 00:04:16.334 } 00:04:16.334 } 00:04:16.334 ] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "vhost_scsi", 00:04:16.334 "config": [] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "vhost_blk", 00:04:16.334 "config": [] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "ublk", 00:04:16.334 "config": [] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "nbd", 00:04:16.334 "config": [] 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "subsystem": "nvmf", 00:04:16.334 "config": [ 00:04:16.334 { 00:04:16.334 "method": "nvmf_set_config", 00:04:16.334 "params": { 00:04:16.334 "discovery_filter": "match_any", 00:04:16.334 "admin_cmd_passthru": { 00:04:16.334 "identify_ctrlr": false 00:04:16.334 } 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "nvmf_set_max_subsystems", 00:04:16.334 "params": { 00:04:16.334 "max_subsystems": 1024 00:04:16.334 } 00:04:16.334 }, 00:04:16.334 { 00:04:16.334 "method": "nvmf_set_crdt", 00:04:16.334 "params": { 00:04:16.334 "crdt1": 0, 00:04:16.334 "crdt2": 0, 00:04:16.334 "crdt3": 0 00:04:16.334 } 00:04:16.335 }, 00:04:16.335 { 00:04:16.335 "method": "nvmf_create_transport", 00:04:16.335 "params": { 00:04:16.335 "trtype": "TCP", 00:04:16.335 "max_queue_depth": 128, 00:04:16.335 "max_io_qpairs_per_ctrlr": 127, 00:04:16.335 "in_capsule_data_size": 4096, 00:04:16.335 "max_io_size": 131072, 00:04:16.335 "io_unit_size": 131072, 00:04:16.335 "max_aq_depth": 128, 00:04:16.335 "num_shared_buffers": 511, 00:04:16.335 "buf_cache_size": 4294967295, 00:04:16.335 "dif_insert_or_strip": false, 00:04:16.335 "zcopy": false, 00:04:16.335 "c2h_success": true, 00:04:16.335 "sock_priority": 0, 00:04:16.335 "abort_timeout_sec": 1, 00:04:16.335 "ack_timeout": 0, 00:04:16.335 "data_wr_pool_size": 0 00:04:16.335 } 00:04:16.335 } 00:04:16.335 ] 00:04:16.335 }, 00:04:16.335 { 00:04:16.335 "subsystem": "iscsi", 00:04:16.335 "config": [ 
00:04:16.335 { 00:04:16.335 "method": "iscsi_set_options", 00:04:16.335 "params": { 00:04:16.335 "node_base": "iqn.2016-06.io.spdk", 00:04:16.335 "max_sessions": 128, 00:04:16.335 "max_connections_per_session": 2, 00:04:16.335 "max_queue_depth": 64, 00:04:16.335 "default_time2wait": 2, 00:04:16.335 "default_time2retain": 20, 00:04:16.335 "first_burst_length": 8192, 00:04:16.335 "immediate_data": true, 00:04:16.335 "allow_duplicated_isid": false, 00:04:16.335 "error_recovery_level": 0, 00:04:16.335 "nop_timeout": 60, 00:04:16.335 "nop_in_interval": 30, 00:04:16.335 "disable_chap": false, 00:04:16.335 "require_chap": false, 00:04:16.335 "mutual_chap": false, 00:04:16.335 "chap_group": 0, 00:04:16.335 "max_large_datain_per_connection": 64, 00:04:16.335 "max_r2t_per_connection": 4, 00:04:16.335 "pdu_pool_size": 36864, 00:04:16.335 "immediate_data_pool_size": 16384, 00:04:16.335 "data_out_pool_size": 2048 00:04:16.335 } 00:04:16.335 } 00:04:16.335 ] 00:04:16.335 } 00:04:16.335 ] 00:04:16.335 } 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58894 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 58894 ']' 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 58894 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58894 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:16.335 killing process with pid 58894 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58894' 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 58894 00:04:16.335 18:25:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 58894 00:04:16.902 18:25:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58927 00:04:16.902 18:25:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.902 18:25:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58927 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 58927 ']' 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 58927 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 58927 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:22.182 killing process with pid 58927 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 58927' 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 58927 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 58927 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:22.182 00:04:22.182 real 0m7.115s 00:04:22.182 user 0m6.825s 00:04:22.182 sys 0m0.671s 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.182 ************************************ 00:04:22.182 END TEST skip_rpc_with_json 00:04:22.182 ************************************ 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.182 18:25:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:22.182 18:25:35 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.182 18:25:35 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.182 18:25:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.182 ************************************ 00:04:22.182 START TEST skip_rpc_with_delay 00:04:22.182 ************************************ 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:22.182 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.182 [2024-05-16 18:25:35.678713] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to 
be started. 00:04:22.182 [2024-05-16 18:25:35.678882] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:22.441 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:22.441 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:22.441 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:22.441 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:22.441 00:04:22.441 real 0m0.076s 00:04:22.441 user 0m0.045s 00:04:22.441 sys 0m0.030s 00:04:22.441 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:22.441 ************************************ 00:04:22.441 END TEST skip_rpc_with_delay 00:04:22.441 18:25:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:22.442 ************************************ 00:04:22.442 18:25:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.442 18:25:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.442 18:25:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.442 18:25:35 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:22.442 18:25:35 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:22.442 18:25:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.442 ************************************ 00:04:22.442 START TEST exit_on_failed_rpc_init 00:04:22.442 ************************************ 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59031 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59031 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 59031 ']' 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:22.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:22.442 18:25:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.442 [2024-05-16 18:25:35.820770] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:22.442 [2024-05-16 18:25:35.820915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59031 ] 00:04:22.700 [2024-05-16 18:25:35.968031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.700 [2024-05-16 18:25:36.090587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.700 [2024-05-16 18:25:36.149988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:23.639 18:25:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.639 [2024-05-16 18:25:36.885468] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:23.639 [2024-05-16 18:25:36.885575] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59049 ] 00:04:23.639 [2024-05-16 18:25:37.024833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.639 [2024-05-16 18:25:37.137961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.639 [2024-05-16 18:25:37.138055] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
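The failure exercised by exit_on_failed_rpc_init is two targets contending for the default RPC socket: the first instance owns /var/tmp/spdk.sock, so the second is expected to abort during RPC init with the "in use. Specify another." error seen here. A reduced reproduction, same SPDK_DIR assumption:

```bash
# Sketch: the second spdk_tgt cannot bind /var/tmp/spdk.sock and should exit non-zero.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumption: local SPDK build tree
SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
first=$!
until [ -S "$SOCK" ]; do sleep 0.1; done   # crude stand-in for waitforlisten

if "$SPDK_DIR/build/bin/spdk_tgt" -m 0x2; then
    echo "second spdk_tgt unexpectedly started" >&2
    kill "$first"
    exit 1
fi
kill "$first"
```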
00:04:23.639 [2024-05-16 18:25:37.138069] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:23.639 [2024-05-16 18:25:37.138078] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59031 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 59031 ']' 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 59031 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59031 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:23.898 killing process with pid 59031 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59031' 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 59031 00:04:23.898 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 59031 00:04:24.465 00:04:24.465 real 0m1.911s 00:04:24.465 user 0m2.272s 00:04:24.465 sys 0m0.423s 00:04:24.465 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.465 18:25:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.465 ************************************ 00:04:24.465 END TEST exit_on_failed_rpc_init 00:04:24.465 ************************************ 00:04:24.465 18:25:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:24.465 00:04:24.465 real 0m14.838s 00:04:24.465 user 0m14.307s 00:04:24.465 sys 0m1.588s 00:04:24.465 18:25:37 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.465 18:25:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.465 ************************************ 00:04:24.465 END TEST skip_rpc 00:04:24.465 ************************************ 00:04:24.465 18:25:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.465 18:25:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.465 18:25:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.465 18:25:37 -- common/autotest_common.sh@10 -- # set +x 00:04:24.465 
************************************ 00:04:24.465 START TEST rpc_client 00:04:24.465 ************************************ 00:04:24.465 18:25:37 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:24.465 * Looking for test storage... 00:04:24.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:24.465 18:25:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:24.465 OK 00:04:24.465 18:25:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.465 00:04:24.465 real 0m0.106s 00:04:24.465 user 0m0.048s 00:04:24.465 sys 0m0.062s 00:04:24.465 18:25:37 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:24.465 18:25:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.465 ************************************ 00:04:24.465 END TEST rpc_client 00:04:24.465 ************************************ 00:04:24.465 18:25:37 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.466 18:25:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.466 18:25:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.466 18:25:37 -- common/autotest_common.sh@10 -- # set +x 00:04:24.466 ************************************ 00:04:24.466 START TEST json_config 00:04:24.466 ************************************ 00:04:24.466 18:25:37 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:24.466 18:25:37 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:24.466 18:25:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.724 18:25:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:24.725 18:25:37 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.725 18:25:37 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.725 18:25:37 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.725 18:25:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.725 18:25:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.725 18:25:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.725 18:25:37 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.725 18:25:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@47 -- # : 0 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:24.725 18:25:37 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.725 18:25:37 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.725 INFO: JSON configuration test init 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.725 18:25:37 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.725 18:25:37 json_config -- json_config/common.sh@9 -- # local app=target 00:04:24.725 18:25:37 json_config -- json_config/common.sh@10 -- # shift 00:04:24.725 18:25:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.725 18:25:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.725 18:25:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.725 18:25:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.725 18:25:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.725 18:25:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59167 00:04:24.725 Waiting for target to run... 00:04:24.725 18:25:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.725 18:25:37 json_config -- json_config/common.sh@25 -- # waitforlisten 59167 /var/tmp/spdk_tgt.sock 00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@827 -- # '[' -z 59167 ']' 00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.725 18:25:37 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:24.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
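The waitforlisten step here blocks until the target just launched with --wait-for-rpc answers on its UNIX-domain RPC socket. A rough standalone equivalent of that polling loop, sketched from rpc.py flags used elsewhere in this log rather than the actual autotest_common.sh implementation:

SOCK=/var/tmp/spdk_tgt.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in $(seq 1 100); do
    # rpc_get_methods succeeds once the target is listening; -t 2 bounds each attempt
    if "$RPC" -s "$SOCK" -t 2 rpc_get_methods >/dev/null 2>&1; then
        echo "target is up on $SOCK"
        break
    fi
    sleep 0.5
done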
00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:24.725 18:25:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.725 [2024-05-16 18:25:38.050987] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:24.725 [2024-05-16 18:25:38.051077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59167 ] 00:04:24.983 [2024-05-16 18:25:38.464109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.242 [2024-05-16 18:25:38.551385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.809 18:25:39 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:25.809 00:04:25.809 18:25:39 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:25.809 18:25:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.809 18:25:39 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:25.809 18:25:39 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:25.809 18:25:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:25.809 18:25:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 18:25:39 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:25.809 18:25:39 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:25.809 18:25:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:25.809 18:25:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.809 18:25:39 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.809 18:25:39 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:25.809 18:25:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.068 [2024-05-16 18:25:39.375286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:26.068 18:25:39 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:26.068 18:25:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.068 18:25:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:26.068 18:25:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.068 18:25:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.068 18:25:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.068 18:25:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.327 18:25:39 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:26.327 18:25:39 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:26.327 18:25:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@48 -- # 
get_types=('bdev_register' 'bdev_unregister') 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:26.619 18:25:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.619 18:25:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:26.619 18:25:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:26.619 18:25:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:26.619 18:25:39 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.619 18:25:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.924 MallocForNvmf0 00:04:26.924 18:25:40 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.924 18:25:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:27.183 MallocForNvmf1 00:04:27.183 18:25:40 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.183 18:25:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.183 [2024-05-16 18:25:40.672408] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.442 18:25:40 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.442 18:25:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.700 18:25:40 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.700 18:25:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.700 18:25:41 json_config -- json_config/json_config.sh@248 
-- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.700 18:25:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.959 18:25:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.959 18:25:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:28.218 [2024-05-16 18:25:41.628718] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:28.218 [2024-05-16 18:25:41.629025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:28.218 18:25:41 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:28.218 18:25:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.218 18:25:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.218 18:25:41 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:28.218 18:25:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.218 18:25:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.477 18:25:41 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:28.477 18:25:41 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.477 18:25:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.477 MallocBdevForConfigChangeCheck 00:04:28.736 18:25:41 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:28.736 18:25:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.736 18:25:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.736 18:25:42 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:28.736 18:25:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.995 INFO: shutting down applications... 00:04:28.995 18:25:42 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
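Condensed from the tgt_rpc calls above, the configuration being saved at this point was built with the following RPC sequence (the same commands as in the log, minus the harness wrappers; the save_config output is what json_config.sh stores as spdk_tgt_config.json for the relaunch further down):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MB malloc bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MB malloc bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0            # TCP transport, options as used by the test
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # deleted later to force a config change
$RPC save_config > spdk_tgt_config.json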
00:04:28.995 18:25:42 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:28.995 18:25:42 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:28.995 18:25:42 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:28.995 18:25:42 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:29.561 Calling clear_iscsi_subsystem 00:04:29.561 Calling clear_nvmf_subsystem 00:04:29.561 Calling clear_nbd_subsystem 00:04:29.561 Calling clear_ublk_subsystem 00:04:29.561 Calling clear_vhost_blk_subsystem 00:04:29.561 Calling clear_vhost_scsi_subsystem 00:04:29.561 Calling clear_bdev_subsystem 00:04:29.561 18:25:42 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:29.561 18:25:42 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:29.561 18:25:42 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:29.561 18:25:42 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:29.561 18:25:42 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.561 18:25:42 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.819 18:25:43 json_config -- json_config/json_config.sh@345 -- # break 00:04:29.819 18:25:43 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:29.819 18:25:43 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:29.819 18:25:43 json_config -- json_config/common.sh@31 -- # local app=target 00:04:29.819 18:25:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.819 18:25:43 json_config -- json_config/common.sh@35 -- # [[ -n 59167 ]] 00:04:29.819 18:25:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59167 00:04:29.819 [2024-05-16 18:25:43.247433] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:29.819 18:25:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.819 18:25:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.819 18:25:43 json_config -- json_config/common.sh@41 -- # kill -0 59167 00:04:29.819 18:25:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:30.383 18:25:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:30.383 18:25:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.383 18:25:43 json_config -- json_config/common.sh@41 -- # kill -0 59167 00:04:30.383 18:25:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:30.383 18:25:43 json_config -- json_config/common.sh@43 -- # break 00:04:30.383 18:25:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:30.383 SPDK target shutdown done 00:04:30.383 18:25:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:30.383 INFO: relaunching applications... 00:04:30.383 18:25:43 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
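The shutdown just logged is the generic json_config/common.sh pattern: send SIGINT to the target, then poll with kill -0 until the process is gone (up to 30 half-second attempts). As a standalone sketch with the PID from this run substituted in:

pid=59167                          # target PID in this run; substitute your own
kill -SIGINT "$pid"
for i in $(seq 1 30); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done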
00:04:30.383 18:25:43 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.383 18:25:43 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.383 18:25:43 json_config -- json_config/common.sh@10 -- # shift 00:04:30.383 18:25:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.383 18:25:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.383 18:25:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.383 18:25:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.383 18:25:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.383 18:25:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59363 00:04:30.383 18:25:43 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:30.383 Waiting for target to run... 00:04:30.383 18:25:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.383 18:25:43 json_config -- json_config/common.sh@25 -- # waitforlisten 59363 /var/tmp/spdk_tgt.sock 00:04:30.383 18:25:43 json_config -- common/autotest_common.sh@827 -- # '[' -z 59363 ']' 00:04:30.383 18:25:43 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.383 18:25:43 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:30.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.383 18:25:43 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.383 18:25:43 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:30.383 18:25:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.383 [2024-05-16 18:25:43.808683] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:30.383 [2024-05-16 18:25:43.808771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59363 ] 00:04:30.966 [2024-05-16 18:25:44.215433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.966 [2024-05-16 18:25:44.306081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.966 [2024-05-16 18:25:44.432142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:31.231 [2024-05-16 18:25:44.630534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.231 [2024-05-16 18:25:44.662421] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:31.231 [2024-05-16 18:25:44.662646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.490 18:25:44 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:31.490 18:25:44 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:31.490 00:04:31.490 18:25:44 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.490 18:25:44 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:31.490 18:25:44 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:31.490 INFO: Checking if target configuration is the same... 00:04:31.490 18:25:44 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.490 18:25:44 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:31.490 18:25:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.490 + '[' 2 -ne 2 ']' 00:04:31.490 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:31.490 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:31.490 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:31.490 +++ basename /dev/fd/62 00:04:31.490 ++ mktemp /tmp/62.XXX 00:04:31.490 + tmp_file_1=/tmp/62.jnG 00:04:31.490 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:31.490 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.490 + tmp_file_2=/tmp/spdk_tgt_config.json.Ypk 00:04:31.490 + ret=0 00:04:31.490 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.749 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:31.749 + diff -u /tmp/62.jnG /tmp/spdk_tgt_config.json.Ypk 00:04:31.749 INFO: JSON config files are the same 00:04:31.749 + echo 'INFO: JSON config files are the same' 00:04:31.749 + rm /tmp/62.jnG /tmp/spdk_tgt_config.json.Ypk 00:04:31.749 + exit 0 00:04:31.749 18:25:45 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:31.749 INFO: changing configuration and checking if this can be detected... 00:04:31.749 18:25:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:04:31.749 18:25:45 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.749 18:25:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.316 18:25:45 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.316 18:25:45 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:32.316 18:25:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.316 + '[' 2 -ne 2 ']' 00:04:32.316 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:32.316 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:32.316 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:32.316 +++ basename /dev/fd/62 00:04:32.316 ++ mktemp /tmp/62.XXX 00:04:32.316 + tmp_file_1=/tmp/62.UXN 00:04:32.316 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.316 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.316 + tmp_file_2=/tmp/spdk_tgt_config.json.Alk 00:04:32.316 + ret=0 00:04:32.316 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.575 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:32.575 + diff -u /tmp/62.UXN /tmp/spdk_tgt_config.json.Alk 00:04:32.575 + ret=1 00:04:32.575 + echo '=== Start of file: /tmp/62.UXN ===' 00:04:32.575 + cat /tmp/62.UXN 00:04:32.575 + echo '=== End of file: /tmp/62.UXN ===' 00:04:32.575 + echo '' 00:04:32.575 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Alk ===' 00:04:32.575 + cat /tmp/spdk_tgt_config.json.Alk 00:04:32.575 + echo '=== End of file: /tmp/spdk_tgt_config.json.Alk ===' 00:04:32.575 + echo '' 00:04:32.575 + rm /tmp/62.UXN /tmp/spdk_tgt_config.json.Alk 00:04:32.575 + exit 1 00:04:32.575 INFO: configuration change detected. 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
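Both checks above, config unchanged and then changed after bdev_malloc_delete MallocBdevForConfigChangeCheck, use the same mechanism: dump the live configuration with save_config, normalize both JSON documents with config_filter.py -method sort, and compare with diff -u. A hand-run sketch of that comparison (the /tmp file names here are arbitrary; json_diff.sh uses mktemp):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
$RPC save_config | "$FILTER" -method sort > /tmp/live_sorted.json
"$FILTER" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved_sorted.json
if diff -u /tmp/saved_sorted.json /tmp/live_sorted.json; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi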
00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:32.575 18:25:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:32.575 18:25:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@317 -- # [[ -n 59363 ]] 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:32.575 18:25:45 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:32.575 18:25:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:32.575 18:25:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.575 18:25:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.575 18:25:45 json_config -- json_config/json_config.sh@323 -- # killprocess 59363 00:04:32.575 18:25:45 json_config -- common/autotest_common.sh@946 -- # '[' -z 59363 ']' 00:04:32.575 18:25:45 json_config -- common/autotest_common.sh@950 -- # kill -0 59363 00:04:32.575 18:25:45 json_config -- common/autotest_common.sh@951 -- # uname 00:04:32.575 18:25:46 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:32.575 18:25:46 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59363 00:04:32.575 18:25:46 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:32.575 killing process with pid 59363 00:04:32.575 18:25:46 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:32.576 18:25:46 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59363' 00:04:32.576 18:25:46 json_config -- common/autotest_common.sh@965 -- # kill 59363 00:04:32.576 [2024-05-16 18:25:46.022078] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:32.576 18:25:46 json_config -- common/autotest_common.sh@970 -- # wait 59363 00:04:32.834 18:25:46 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.834 18:25:46 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:32.834 18:25:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.834 18:25:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.834 18:25:46 json_config -- json_config/json_config.sh@328 -- # return 0 
00:04:32.834 INFO: Success 00:04:32.835 18:25:46 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:32.835 ************************************ 00:04:32.835 END TEST json_config 00:04:32.835 ************************************ 00:04:32.835 00:04:32.835 real 0m8.401s 00:04:32.835 user 0m12.117s 00:04:32.835 sys 0m1.726s 00:04:32.835 18:25:46 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:32.835 18:25:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.094 18:25:46 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:33.094 18:25:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.094 18:25:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.094 18:25:46 -- common/autotest_common.sh@10 -- # set +x 00:04:33.094 ************************************ 00:04:33.094 START TEST json_config_extra_key 00:04:33.094 ************************************ 00:04:33.094 18:25:46 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:33.094 18:25:46 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.094 18:25:46 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.094 18:25:46 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.094 18:25:46 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.094 18:25:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.094 18:25:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.094 18:25:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:33.094 18:25:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:33.094 18:25:46 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:33.094 18:25:46 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.094 INFO: launching applications... 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:33.094 18:25:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59504 00:04:33.094 Waiting for target to run... 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59504 /var/tmp/spdk_tgt.sock 00:04:33.094 18:25:46 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:33.094 18:25:46 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 59504 ']' 00:04:33.094 18:25:46 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.095 18:25:46 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:33.095 18:25:46 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.095 18:25:46 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:33.095 18:25:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.095 [2024-05-16 18:25:46.487773] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
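Unlike the json_config suite, json_config_extra_key boots the target directly from a JSON file via the --json flag shown just above, instead of configuring it over RPC afterwards. The contents of extra_key.json are not shown in this log; purely as an illustration of the general shape spdk_tgt accepts (the same structure save_config emits), a made-up minimal configuration could be supplied like this:

# Illustrative only; this is not the real extra_key.json used by the test.
cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 } }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_config.json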
00:04:33.095 [2024-05-16 18:25:46.487896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59504 ] 00:04:33.663 [2024-05-16 18:25:46.893511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.663 [2024-05-16 18:25:46.993115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.663 [2024-05-16 18:25:47.015166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:34.230 18:25:47 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:34.230 18:25:47 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:34.230 00:04:34.230 18:25:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:34.230 INFO: shutting down applications... 00:04:34.230 18:25:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:34.230 18:25:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:34.230 18:25:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:34.230 18:25:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.230 18:25:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59504 ]] 00:04:34.230 18:25:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59504 00:04:34.230 18:25:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.230 18:25:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.230 18:25:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59504 00:04:34.230 18:25:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.799 18:25:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.799 18:25:48 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.799 18:25:48 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59504 00:04:34.799 18:25:48 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.799 18:25:48 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.799 18:25:48 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.799 SPDK target shutdown done 00:04:34.799 Success 00:04:34.799 18:25:48 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.799 18:25:48 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.799 ************************************ 00:04:34.799 END TEST json_config_extra_key 00:04:34.799 ************************************ 00:04:34.799 00:04:34.799 real 0m1.660s 00:04:34.799 user 0m1.614s 00:04:34.799 sys 0m0.422s 00:04:34.799 18:25:48 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:34.799 18:25:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.799 18:25:48 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.799 18:25:48 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:34.799 18:25:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:34.799 18:25:48 -- common/autotest_common.sh@10 -- # set +x 
00:04:34.799 ************************************ 00:04:34.799 START TEST alias_rpc 00:04:34.799 ************************************ 00:04:34.799 18:25:48 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.799 * Looking for test storage... 00:04:34.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:34.799 18:25:48 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.799 18:25:48 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59568 00:04:34.799 18:25:48 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.799 18:25:48 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59568 00:04:34.799 18:25:48 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 59568 ']' 00:04:34.799 18:25:48 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.799 18:25:48 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:34.799 18:25:48 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.799 18:25:48 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:34.799 18:25:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.799 [2024-05-16 18:25:48.245994] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:34.799 [2024-05-16 18:25:48.246580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59568 ] 00:04:35.057 [2024-05-16 18:25:48.390573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.057 [2024-05-16 18:25:48.500387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.057 [2024-05-16 18:25:48.554389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:35.993 18:25:49 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:35.993 18:25:49 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:35.993 18:25:49 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:36.251 18:25:49 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59568 00:04:36.251 18:25:49 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 59568 ']' 00:04:36.251 18:25:49 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 59568 00:04:36.251 18:25:49 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:36.251 18:25:49 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:36.251 18:25:49 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59568 00:04:36.251 killing process with pid 59568 00:04:36.251 18:25:49 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:36.251 18:25:49 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:36.251 18:25:49 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59568' 00:04:36.251 18:25:49 alias_rpc -- common/autotest_common.sh@965 -- # kill 59568 00:04:36.251 18:25:49 alias_rpc -- 
common/autotest_common.sh@970 -- # wait 59568 00:04:36.510 ************************************ 00:04:36.510 END TEST alias_rpc 00:04:36.510 ************************************ 00:04:36.510 00:04:36.510 real 0m1.891s 00:04:36.510 user 0m2.197s 00:04:36.510 sys 0m0.446s 00:04:36.510 18:25:49 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:36.510 18:25:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.510 18:25:50 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:36.510 18:25:50 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.510 18:25:50 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.510 18:25:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.510 18:25:50 -- common/autotest_common.sh@10 -- # set +x 00:04:36.767 ************************************ 00:04:36.767 START TEST spdkcli_tcp 00:04:36.767 ************************************ 00:04:36.767 18:25:50 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:36.767 * Looking for test storage... 00:04:36.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:36.767 18:25:50 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:36.767 18:25:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59644 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:36.767 18:25:50 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59644 00:04:36.767 18:25:50 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 59644 ']' 00:04:36.767 18:25:50 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.767 18:25:50 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:36.767 18:25:50 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.767 18:25:50 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:36.767 18:25:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.767 [2024-05-16 18:25:50.155464] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
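The spdkcli_tcp run starting here drives the target's RPC interface over TCP: a socat process bridges TCP port 9998 to the RPC UNIX socket, and rpc.py is pointed at 127.0.0.1:9998 (both commands appear a little further down in the log). Reduced to its essentials, the bridge works like this:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # forward TCP 9998 to the RPC UNIX socket
socat_pid=$!
# -r connection retries, -t response timeout in seconds, -s/-p the TCP endpoint served by socat
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill "$socat_pid"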
00:04:36.767 [2024-05-16 18:25:50.156240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59644 ] 00:04:37.024 [2024-05-16 18:25:50.291745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:37.024 [2024-05-16 18:25:50.406215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.024 [2024-05-16 18:25:50.406224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.024 [2024-05-16 18:25:50.461507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:37.957 18:25:51 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:37.957 18:25:51 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:37.957 18:25:51 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:37.957 18:25:51 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59661 00:04:37.957 18:25:51 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:37.957 [ 00:04:37.957 "bdev_malloc_delete", 00:04:37.957 "bdev_malloc_create", 00:04:37.957 "bdev_null_resize", 00:04:37.957 "bdev_null_delete", 00:04:37.957 "bdev_null_create", 00:04:37.957 "bdev_nvme_cuse_unregister", 00:04:37.957 "bdev_nvme_cuse_register", 00:04:37.957 "bdev_opal_new_user", 00:04:37.957 "bdev_opal_set_lock_state", 00:04:37.957 "bdev_opal_delete", 00:04:37.957 "bdev_opal_get_info", 00:04:37.957 "bdev_opal_create", 00:04:37.957 "bdev_nvme_opal_revert", 00:04:37.957 "bdev_nvme_opal_init", 00:04:37.957 "bdev_nvme_send_cmd", 00:04:37.958 "bdev_nvme_get_path_iostat", 00:04:37.958 "bdev_nvme_get_mdns_discovery_info", 00:04:37.958 "bdev_nvme_stop_mdns_discovery", 00:04:37.958 "bdev_nvme_start_mdns_discovery", 00:04:37.958 "bdev_nvme_set_multipath_policy", 00:04:37.958 "bdev_nvme_set_preferred_path", 00:04:37.958 "bdev_nvme_get_io_paths", 00:04:37.958 "bdev_nvme_remove_error_injection", 00:04:37.958 "bdev_nvme_add_error_injection", 00:04:37.958 "bdev_nvme_get_discovery_info", 00:04:37.958 "bdev_nvme_stop_discovery", 00:04:37.958 "bdev_nvme_start_discovery", 00:04:37.958 "bdev_nvme_get_controller_health_info", 00:04:37.958 "bdev_nvme_disable_controller", 00:04:37.958 "bdev_nvme_enable_controller", 00:04:37.958 "bdev_nvme_reset_controller", 00:04:37.958 "bdev_nvme_get_transport_statistics", 00:04:37.958 "bdev_nvme_apply_firmware", 00:04:37.958 "bdev_nvme_detach_controller", 00:04:37.958 "bdev_nvme_get_controllers", 00:04:37.958 "bdev_nvme_attach_controller", 00:04:37.958 "bdev_nvme_set_hotplug", 00:04:37.958 "bdev_nvme_set_options", 00:04:37.958 "bdev_passthru_delete", 00:04:37.958 "bdev_passthru_create", 00:04:37.958 "bdev_lvol_set_parent_bdev", 00:04:37.958 "bdev_lvol_set_parent", 00:04:37.958 "bdev_lvol_check_shallow_copy", 00:04:37.958 "bdev_lvol_start_shallow_copy", 00:04:37.958 "bdev_lvol_grow_lvstore", 00:04:37.958 "bdev_lvol_get_lvols", 00:04:37.958 "bdev_lvol_get_lvstores", 00:04:37.958 "bdev_lvol_delete", 00:04:37.958 "bdev_lvol_set_read_only", 00:04:37.958 "bdev_lvol_resize", 00:04:37.958 "bdev_lvol_decouple_parent", 00:04:37.958 "bdev_lvol_inflate", 00:04:37.958 "bdev_lvol_rename", 00:04:37.958 "bdev_lvol_clone_bdev", 00:04:37.958 "bdev_lvol_clone", 00:04:37.958 "bdev_lvol_snapshot", 00:04:37.958 "bdev_lvol_create", 00:04:37.958 
"bdev_lvol_delete_lvstore", 00:04:37.958 "bdev_lvol_rename_lvstore", 00:04:37.958 "bdev_lvol_create_lvstore", 00:04:37.958 "bdev_raid_set_options", 00:04:37.958 "bdev_raid_remove_base_bdev", 00:04:37.958 "bdev_raid_add_base_bdev", 00:04:37.958 "bdev_raid_delete", 00:04:37.958 "bdev_raid_create", 00:04:37.958 "bdev_raid_get_bdevs", 00:04:37.958 "bdev_error_inject_error", 00:04:37.958 "bdev_error_delete", 00:04:37.958 "bdev_error_create", 00:04:37.958 "bdev_split_delete", 00:04:37.958 "bdev_split_create", 00:04:37.958 "bdev_delay_delete", 00:04:37.958 "bdev_delay_create", 00:04:37.958 "bdev_delay_update_latency", 00:04:37.958 "bdev_zone_block_delete", 00:04:37.958 "bdev_zone_block_create", 00:04:37.958 "blobfs_create", 00:04:37.958 "blobfs_detect", 00:04:37.958 "blobfs_set_cache_size", 00:04:37.958 "bdev_aio_delete", 00:04:37.958 "bdev_aio_rescan", 00:04:37.958 "bdev_aio_create", 00:04:37.958 "bdev_ftl_set_property", 00:04:37.958 "bdev_ftl_get_properties", 00:04:37.958 "bdev_ftl_get_stats", 00:04:37.958 "bdev_ftl_unmap", 00:04:37.958 "bdev_ftl_unload", 00:04:37.958 "bdev_ftl_delete", 00:04:37.958 "bdev_ftl_load", 00:04:37.958 "bdev_ftl_create", 00:04:37.958 "bdev_virtio_attach_controller", 00:04:37.958 "bdev_virtio_scsi_get_devices", 00:04:37.958 "bdev_virtio_detach_controller", 00:04:37.958 "bdev_virtio_blk_set_hotplug", 00:04:37.958 "bdev_iscsi_delete", 00:04:37.958 "bdev_iscsi_create", 00:04:37.958 "bdev_iscsi_set_options", 00:04:37.958 "bdev_uring_delete", 00:04:37.958 "bdev_uring_rescan", 00:04:37.958 "bdev_uring_create", 00:04:37.958 "accel_error_inject_error", 00:04:37.958 "ioat_scan_accel_module", 00:04:37.958 "dsa_scan_accel_module", 00:04:37.958 "iaa_scan_accel_module", 00:04:37.958 "keyring_file_remove_key", 00:04:37.958 "keyring_file_add_key", 00:04:37.958 "iscsi_get_histogram", 00:04:37.958 "iscsi_enable_histogram", 00:04:37.958 "iscsi_set_options", 00:04:37.958 "iscsi_get_auth_groups", 00:04:37.958 "iscsi_auth_group_remove_secret", 00:04:37.958 "iscsi_auth_group_add_secret", 00:04:37.958 "iscsi_delete_auth_group", 00:04:37.958 "iscsi_create_auth_group", 00:04:37.958 "iscsi_set_discovery_auth", 00:04:37.958 "iscsi_get_options", 00:04:37.958 "iscsi_target_node_request_logout", 00:04:37.958 "iscsi_target_node_set_redirect", 00:04:37.958 "iscsi_target_node_set_auth", 00:04:37.958 "iscsi_target_node_add_lun", 00:04:37.958 "iscsi_get_stats", 00:04:37.958 "iscsi_get_connections", 00:04:37.958 "iscsi_portal_group_set_auth", 00:04:37.958 "iscsi_start_portal_group", 00:04:37.958 "iscsi_delete_portal_group", 00:04:37.958 "iscsi_create_portal_group", 00:04:37.958 "iscsi_get_portal_groups", 00:04:37.958 "iscsi_delete_target_node", 00:04:37.958 "iscsi_target_node_remove_pg_ig_maps", 00:04:37.958 "iscsi_target_node_add_pg_ig_maps", 00:04:37.958 "iscsi_create_target_node", 00:04:37.958 "iscsi_get_target_nodes", 00:04:37.958 "iscsi_delete_initiator_group", 00:04:37.958 "iscsi_initiator_group_remove_initiators", 00:04:37.958 "iscsi_initiator_group_add_initiators", 00:04:37.958 "iscsi_create_initiator_group", 00:04:37.958 "iscsi_get_initiator_groups", 00:04:37.958 "nvmf_set_crdt", 00:04:37.958 "nvmf_set_config", 00:04:37.958 "nvmf_set_max_subsystems", 00:04:37.958 "nvmf_stop_mdns_prr", 00:04:37.958 "nvmf_publish_mdns_prr", 00:04:37.958 "nvmf_subsystem_get_listeners", 00:04:37.958 "nvmf_subsystem_get_qpairs", 00:04:37.958 "nvmf_subsystem_get_controllers", 00:04:37.958 "nvmf_get_stats", 00:04:37.958 "nvmf_get_transports", 00:04:37.958 "nvmf_create_transport", 00:04:37.958 "nvmf_get_targets", 
00:04:37.958 "nvmf_delete_target", 00:04:37.958 "nvmf_create_target", 00:04:37.958 "nvmf_subsystem_allow_any_host", 00:04:37.958 "nvmf_subsystem_remove_host", 00:04:37.958 "nvmf_subsystem_add_host", 00:04:37.958 "nvmf_ns_remove_host", 00:04:37.958 "nvmf_ns_add_host", 00:04:37.958 "nvmf_subsystem_remove_ns", 00:04:37.958 "nvmf_subsystem_add_ns", 00:04:37.958 "nvmf_subsystem_listener_set_ana_state", 00:04:37.958 "nvmf_discovery_get_referrals", 00:04:37.958 "nvmf_discovery_remove_referral", 00:04:37.958 "nvmf_discovery_add_referral", 00:04:37.958 "nvmf_subsystem_remove_listener", 00:04:37.958 "nvmf_subsystem_add_listener", 00:04:37.958 "nvmf_delete_subsystem", 00:04:37.958 "nvmf_create_subsystem", 00:04:37.958 "nvmf_get_subsystems", 00:04:37.958 "env_dpdk_get_mem_stats", 00:04:37.958 "nbd_get_disks", 00:04:37.958 "nbd_stop_disk", 00:04:37.958 "nbd_start_disk", 00:04:37.958 "ublk_recover_disk", 00:04:37.958 "ublk_get_disks", 00:04:37.958 "ublk_stop_disk", 00:04:37.958 "ublk_start_disk", 00:04:37.958 "ublk_destroy_target", 00:04:37.958 "ublk_create_target", 00:04:37.958 "virtio_blk_create_transport", 00:04:37.958 "virtio_blk_get_transports", 00:04:37.958 "vhost_controller_set_coalescing", 00:04:37.958 "vhost_get_controllers", 00:04:37.958 "vhost_delete_controller", 00:04:37.958 "vhost_create_blk_controller", 00:04:37.958 "vhost_scsi_controller_remove_target", 00:04:37.958 "vhost_scsi_controller_add_target", 00:04:37.958 "vhost_start_scsi_controller", 00:04:37.958 "vhost_create_scsi_controller", 00:04:37.958 "thread_set_cpumask", 00:04:37.958 "framework_get_scheduler", 00:04:37.958 "framework_set_scheduler", 00:04:37.958 "framework_get_reactors", 00:04:37.958 "thread_get_io_channels", 00:04:37.958 "thread_get_pollers", 00:04:37.958 "thread_get_stats", 00:04:37.958 "framework_monitor_context_switch", 00:04:37.958 "spdk_kill_instance", 00:04:37.958 "log_enable_timestamps", 00:04:37.958 "log_get_flags", 00:04:37.958 "log_clear_flag", 00:04:37.958 "log_set_flag", 00:04:37.958 "log_get_level", 00:04:37.958 "log_set_level", 00:04:37.958 "log_get_print_level", 00:04:37.958 "log_set_print_level", 00:04:37.958 "framework_enable_cpumask_locks", 00:04:37.958 "framework_disable_cpumask_locks", 00:04:37.958 "framework_wait_init", 00:04:37.958 "framework_start_init", 00:04:37.958 "scsi_get_devices", 00:04:37.958 "bdev_get_histogram", 00:04:37.958 "bdev_enable_histogram", 00:04:37.958 "bdev_set_qos_limit", 00:04:37.958 "bdev_set_qd_sampling_period", 00:04:37.958 "bdev_get_bdevs", 00:04:37.958 "bdev_reset_iostat", 00:04:37.958 "bdev_get_iostat", 00:04:37.958 "bdev_examine", 00:04:37.958 "bdev_wait_for_examine", 00:04:37.958 "bdev_set_options", 00:04:37.958 "notify_get_notifications", 00:04:37.958 "notify_get_types", 00:04:37.958 "accel_get_stats", 00:04:37.958 "accel_set_options", 00:04:37.958 "accel_set_driver", 00:04:37.958 "accel_crypto_key_destroy", 00:04:37.958 "accel_crypto_keys_get", 00:04:37.958 "accel_crypto_key_create", 00:04:37.958 "accel_assign_opc", 00:04:37.958 "accel_get_module_info", 00:04:37.958 "accel_get_opc_assignments", 00:04:37.958 "vmd_rescan", 00:04:37.958 "vmd_remove_device", 00:04:37.958 "vmd_enable", 00:04:37.958 "sock_get_default_impl", 00:04:37.958 "sock_set_default_impl", 00:04:37.958 "sock_impl_set_options", 00:04:37.958 "sock_impl_get_options", 00:04:37.958 "iobuf_get_stats", 00:04:37.958 "iobuf_set_options", 00:04:37.958 "framework_get_pci_devices", 00:04:37.958 "framework_get_config", 00:04:37.958 "framework_get_subsystems", 00:04:37.958 "trace_get_info", 00:04:37.958 
"trace_get_tpoint_group_mask", 00:04:37.958 "trace_disable_tpoint_group", 00:04:37.958 "trace_enable_tpoint_group", 00:04:37.958 "trace_clear_tpoint_mask", 00:04:37.958 "trace_set_tpoint_mask", 00:04:37.958 "keyring_get_keys", 00:04:37.958 "spdk_get_version", 00:04:37.958 "rpc_get_methods" 00:04:37.958 ] 00:04:37.958 18:25:51 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:37.958 18:25:51 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.958 18:25:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:37.958 18:25:51 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:37.958 18:25:51 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59644 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 59644 ']' 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 59644 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59644 00:04:37.959 killing process with pid 59644 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59644' 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 59644 00:04:37.959 18:25:51 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 59644 00:04:38.525 ************************************ 00:04:38.525 END TEST spdkcli_tcp 00:04:38.525 ************************************ 00:04:38.525 00:04:38.525 real 0m1.851s 00:04:38.525 user 0m3.395s 00:04:38.525 sys 0m0.484s 00:04:38.525 18:25:51 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.525 18:25:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.525 18:25:51 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.525 18:25:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.525 18:25:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.525 18:25:51 -- common/autotest_common.sh@10 -- # set +x 00:04:38.525 ************************************ 00:04:38.525 START TEST dpdk_mem_utility 00:04:38.525 ************************************ 00:04:38.525 18:25:51 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:38.525 * Looking for test storage... 
00:04:38.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:38.525 18:25:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:38.526 18:25:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59735 00:04:38.526 18:25:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.526 18:25:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59735 00:04:38.526 18:25:52 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 59735 ']' 00:04:38.526 18:25:52 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.526 18:25:52 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:38.526 18:25:52 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.526 18:25:52 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:38.526 18:25:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:38.797 [2024-05-16 18:25:52.056224] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:38.797 [2024-05-16 18:25:52.056535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59735 ] 00:04:38.797 [2024-05-16 18:25:52.187731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.797 [2024-05-16 18:25:52.292207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.056 [2024-05-16 18:25:52.347914] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:39.624 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:39.624 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:39.624 18:25:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:39.624 18:25:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:39.624 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:39.624 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.624 { 00:04:39.624 "filename": "/tmp/spdk_mem_dump.txt" 00:04:39.624 } 00:04:39.624 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:39.624 18:25:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:39.884 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:39.884 1 heaps totaling size 814.000000 MiB 00:04:39.884 size: 814.000000 MiB heap id: 0 00:04:39.884 end heaps---------- 00:04:39.884 8 mempools totaling size 598.116089 MiB 00:04:39.884 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:39.884 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:39.884 size: 84.521057 MiB name: bdev_io_59735 00:04:39.884 size: 51.011292 MiB name: evtpool_59735 00:04:39.884 size: 50.003479 MiB name: msgpool_59735 
00:04:39.884 size: 21.763794 MiB name: PDU_Pool 00:04:39.884 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:39.884 size: 0.026123 MiB name: Session_Pool 00:04:39.884 end mempools------- 00:04:39.884 6 memzones totaling size 4.142822 MiB 00:04:39.884 size: 1.000366 MiB name: RG_ring_0_59735 00:04:39.884 size: 1.000366 MiB name: RG_ring_1_59735 00:04:39.884 size: 1.000366 MiB name: RG_ring_4_59735 00:04:39.884 size: 1.000366 MiB name: RG_ring_5_59735 00:04:39.884 size: 0.125366 MiB name: RG_ring_2_59735 00:04:39.884 size: 0.015991 MiB name: RG_ring_3_59735 00:04:39.884 end memzones------- 00:04:39.884 18:25:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:39.885 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:04:39.885 list of free elements. size: 12.471558 MiB 00:04:39.885 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:39.885 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:39.885 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:39.885 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:39.885 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:39.885 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:39.885 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:39.885 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:39.885 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:39.885 element at address: 0x20001aa00000 with size: 0.568420 MiB 00:04:39.885 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:39.885 element at address: 0x200000800000 with size: 0.486328 MiB 00:04:39.885 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:39.885 element at address: 0x200027e00000 with size: 0.396118 MiB 00:04:39.885 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:39.885 list of standard malloc elements. 
size: 199.265869 MiB 00:04:39.885 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:39.885 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:39.885 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:39.885 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:39.885 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:39.885 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:39.885 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:39.885 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:39.885 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:39.885 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7340 with size: 0.000183 MiB 
00:04:39.885 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:39.885 element at 
address: 0x200003a5a380 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:39.885 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:39.885 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:39.885 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91840 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91d80 
with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94240 with size: 0.000183 MiB 
00:04:39.886 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:39.886 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e65680 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e65740 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6c340 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:39.886 element at 
address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:39.886 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6fa80 
with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:39.887 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:39.887 list of memzone associated elements. size: 602.262573 MiB 00:04:39.887 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:39.887 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:39.887 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:39.887 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:39.887 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:39.887 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_59735_0 00:04:39.887 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:39.887 associated memzone info: size: 48.002930 MiB name: MP_evtpool_59735_0 00:04:39.887 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:39.887 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59735_0 00:04:39.887 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:39.887 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:39.887 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:39.887 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:39.887 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:39.887 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_59735 00:04:39.887 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:39.887 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59735 00:04:39.887 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:39.887 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59735 00:04:39.887 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:39.887 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:39.887 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:39.887 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:39.887 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:39.887 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:39.887 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:39.887 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:39.887 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:39.887 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59735 00:04:39.887 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:39.887 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59735 00:04:39.887 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:39.887 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59735 00:04:39.887 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:39.887 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59735 00:04:39.887 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:39.887 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59735 00:04:39.887 element at address: 0x20000b27db80 with size: 0.500488 MiB 
00:04:39.887 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:39.887 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:39.887 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:39.887 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:39.887 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:39.887 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:39.887 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59735 00:04:39.887 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:39.887 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:39.887 element at address: 0x200027e65800 with size: 0.023743 MiB 00:04:39.887 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:39.887 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:39.887 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59735 00:04:39.887 element at address: 0x200027e6b940 with size: 0.002441 MiB 00:04:39.887 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:39.887 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:39.887 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59735 00:04:39.887 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:39.887 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59735 00:04:39.887 element at address: 0x200027e6c400 with size: 0.000305 MiB 00:04:39.887 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:39.887 18:25:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:39.887 18:25:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59735 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 59735 ']' 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 59735 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59735 00:04:39.887 killing process with pid 59735 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59735' 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 59735 00:04:39.887 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 59735 00:04:40.454 00:04:40.454 real 0m1.741s 00:04:40.454 user 0m1.938s 00:04:40.454 sys 0m0.433s 00:04:40.454 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.454 ************************************ 00:04:40.454 END TEST dpdk_mem_utility 00:04:40.454 ************************************ 00:04:40.454 18:25:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.455 18:25:53 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:40.455 18:25:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.455 18:25:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.455 
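The heap, mempool and memzone tables above come from scripts/dpdk_mem_info.py parsing the dump that env_dpdk_get_mem_stats writes to /tmp/spdk_mem_dump.txt; the long per-element listing is the same tool run with -m 0 for heap 0. A condensed sketch of that sequence against a running spdk_tgt, using only the commands visible in the log:

    # ask the running target to dump its DPDK allocator state
    # (the RPC replies with the dump location, /tmp/spdk_mem_dump.txt)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # summarize heaps, mempools and memzones from that dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # per-element breakdown of heap 0 (free list, malloc elements, memzones)
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0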
18:25:53 -- common/autotest_common.sh@10 -- # set +x 00:04:40.455 ************************************ 00:04:40.455 START TEST event 00:04:40.455 ************************************ 00:04:40.455 18:25:53 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:40.455 * Looking for test storage... 00:04:40.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:40.455 18:25:53 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:40.455 18:25:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:40.455 18:25:53 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.455 18:25:53 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:40.455 18:25:53 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.455 18:25:53 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.455 ************************************ 00:04:40.455 START TEST event_perf 00:04:40.455 ************************************ 00:04:40.455 18:25:53 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.455 Running I/O for 1 seconds...[2024-05-16 18:25:53.822554] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:40.455 [2024-05-16 18:25:53.822757] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59807 ] 00:04:40.714 [2024-05-16 18:25:53.956391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:40.714 [2024-05-16 18:25:54.067940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.714 [2024-05-16 18:25:54.068066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.714 [2024-05-16 18:25:54.068154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.714 [2024-05-16 18:25:54.068155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.649 Running I/O for 1 seconds... 00:04:41.649 lcore 0: 186004 00:04:41.649 lcore 1: 186004 00:04:41.649 lcore 2: 186005 00:04:41.649 lcore 3: 186005 00:04:41.908 done. 
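The event_perf run that just printed "done." drives four reactors for one second; each "lcore N: 186004" line reads as that reactor's event count for the run, i.e. roughly 186k events per second per core here. A one-line sketch of the same invocation, taken from the log (core mask 0xF = cores 0-3, -t = run time in seconds):

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1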
00:04:41.908 00:04:41.908 real 0m1.349s 00:04:41.908 user 0m4.164s 00:04:41.908 sys 0m0.066s 00:04:41.908 ************************************ 00:04:41.908 END TEST event_perf 00:04:41.908 ************************************ 00:04:41.908 18:25:55 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:41.908 18:25:55 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.908 18:25:55 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:41.908 18:25:55 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:41.908 18:25:55 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:41.908 18:25:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.908 ************************************ 00:04:41.908 START TEST event_reactor 00:04:41.908 ************************************ 00:04:41.908 18:25:55 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:41.908 [2024-05-16 18:25:55.228070] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:41.908 [2024-05-16 18:25:55.228170] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59845 ] 00:04:41.908 [2024-05-16 18:25:55.365868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.166 [2024-05-16 18:25:55.493742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.113 test_start 00:04:43.113 oneshot 00:04:43.113 tick 100 00:04:43.113 tick 100 00:04:43.113 tick 250 00:04:43.113 tick 100 00:04:43.113 tick 100 00:04:43.113 tick 500 00:04:43.113 tick 100 00:04:43.113 tick 250 00:04:43.113 tick 100 00:04:43.113 tick 100 00:04:43.113 tick 250 00:04:43.113 tick 100 00:04:43.113 tick 100 00:04:43.113 test_end 00:04:43.113 ************************************ 00:04:43.113 END TEST event_reactor 00:04:43.113 ************************************ 00:04:43.113 00:04:43.113 real 0m1.374s 00:04:43.113 user 0m1.211s 00:04:43.113 sys 0m0.057s 00:04:43.113 18:25:56 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.113 18:25:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:43.372 18:25:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.372 18:25:56 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:43.372 18:25:56 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.372 18:25:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.372 ************************************ 00:04:43.372 START TEST event_reactor_perf 00:04:43.372 ************************************ 00:04:43.372 18:25:56 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.372 [2024-05-16 18:25:56.648314] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:04:43.372 [2024-05-16 18:25:56.648424] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59881 ] 00:04:43.372 [2024-05-16 18:25:56.789406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.630 [2024-05-16 18:25:56.910356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.567 test_start 00:04:44.567 test_end 00:04:44.567 Performance: 378771 events per second 00:04:44.567 00:04:44.567 real 0m1.376s 00:04:44.567 user 0m1.209s 00:04:44.567 sys 0m0.060s 00:04:44.567 18:25:58 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.567 ************************************ 00:04:44.567 18:25:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.567 END TEST event_reactor_perf 00:04:44.567 ************************************ 00:04:44.567 18:25:58 event -- event/event.sh@49 -- # uname -s 00:04:44.567 18:25:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:44.567 18:25:58 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:44.567 18:25:58 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.567 18:25:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.567 18:25:58 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.567 ************************************ 00:04:44.567 START TEST event_scheduler 00:04:44.567 ************************************ 00:04:44.567 18:25:58 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:44.827 * Looking for test storage... 00:04:44.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:44.827 18:25:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:44.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.827 18:25:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59942 00:04:44.827 18:25:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.827 18:25:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:44.827 18:25:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59942 00:04:44.827 18:25:58 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 59942 ']' 00:04:44.827 18:25:58 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.827 18:25:58 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:44.827 18:25:58 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.827 18:25:58 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:44.827 18:25:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.827 [2024-05-16 18:25:58.194982] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
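The single-reactor counterpart finishes above: reactor_perf reports roughly 379k events per second on one core. Its invocation, as shown in the log, is simply:

    # single-reactor event throughput, 1-second run
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1

The scheduler stage that begins next starts its app with -m 0xF -p 0x2 --wait-for-rpc (also visible in the log); --wait-for-rpc holds subsystem initialization until framework_start_init, which is why the test can issue framework_set_scheduler dynamic first, as seen further down.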
00:04:44.827 [2024-05-16 18:25:58.195319] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59942 ] 00:04:45.085 [2024-05-16 18:25:58.337244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.085 [2024-05-16 18:25:58.470555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.085 [2024-05-16 18:25:58.470646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.086 [2024-05-16 18:25:58.470788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.086 [2024-05-16 18:25:58.470781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:46.021 18:25:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.021 POWER: Env isn't set yet! 00:04:46.021 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:46.021 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.021 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.021 POWER: Attempting to initialise PSTAT power management... 00:04:46.021 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.021 POWER: Cannot set governor of lcore 0 to performance 00:04:46.021 POWER: Attempting to initialise AMD PSTATE power management... 00:04:46.021 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.021 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.021 POWER: Attempting to initialise CPPC power management... 00:04:46.021 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:46.021 POWER: Cannot set governor of lcore 0 to userspace 00:04:46.021 POWER: Attempting to initialise VM power management... 
00:04:46.021 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:46.021 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:46.021 POWER: Unable to set Power Management Environment for lcore 0 00:04:46.021 [2024-05-16 18:25:59.248440] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:46.021 [2024-05-16 18:25:59.248455] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:46.021 [2024-05-16 18:25:59.248463] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.021 18:25:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.021 [2024-05-16 18:25:59.313095] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:46.021 [2024-05-16 18:25:59.350498] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.021 18:25:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.021 18:25:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.021 ************************************ 00:04:46.021 START TEST scheduler_create_thread 00:04:46.021 ************************************ 00:04:46.021 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:46.021 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:46.021 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.021 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.021 2 00:04:46.021 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.021 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:46.021 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.021 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.021 3 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 4 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 5 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 6 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 7 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 8 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 9 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 10 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:46.022 18:25:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.401 ************************************ 00:04:47.401 END TEST scheduler_create_thread 00:04:47.401 ************************************ 00:04:47.401 18:26:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.401 00:04:47.401 real 0m1.171s 00:04:47.401 user 0m0.019s 00:04:47.401 sys 0m0.005s 00:04:47.401 18:26:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.401 18:26:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.401 18:26:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:47.401 18:26:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59942 00:04:47.401 18:26:00 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 59942 ']' 00:04:47.401 18:26:00 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 59942 00:04:47.402 18:26:00 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:47.402 18:26:00 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:47.402 18:26:00 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 59942 00:04:47.402 killing process with pid 59942 
00:04:47.402 18:26:00 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:47.402 18:26:00 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:47.402 18:26:00 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 59942' 00:04:47.402 18:26:00 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 59942 00:04:47.402 18:26:00 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 59942 00:04:47.660 [2024-05-16 18:26:01.013189] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:47.923 00:04:47.923 real 0m3.187s 00:04:47.923 user 0m5.877s 00:04:47.923 sys 0m0.360s 00:04:47.923 18:26:01 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.923 ************************************ 00:04:47.923 END TEST event_scheduler 00:04:47.923 ************************************ 00:04:47.923 18:26:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.923 18:26:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:47.923 18:26:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:47.923 18:26:01 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.923 18:26:01 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.923 18:26:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.923 ************************************ 00:04:47.923 START TEST app_repeat 00:04:47.923 ************************************ 00:04:47.923 18:26:01 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60025 00:04:47.923 Process app_repeat pid: 60025 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60025' 00:04:47.923 spdk_app_start Round 0 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:47.923 18:26:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60025 /var/tmp/spdk-nbd.sock 00:04:47.923 18:26:01 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 60025 ']' 00:04:47.923 18:26:01 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.923 18:26:01 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:47.923 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-nbd.sock... 00:04:47.923 18:26:01 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.923 18:26:01 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:47.923 18:26:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.923 [2024-05-16 18:26:01.331433] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:04:47.923 [2024-05-16 18:26:01.331521] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60025 ] 00:04:48.183 [2024-05-16 18:26:01.465665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.183 [2024-05-16 18:26:01.596069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.183 [2024-05-16 18:26:01.596083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.183 [2024-05-16 18:26:01.654262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:49.121 18:26:02 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:49.121 18:26:02 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:49.121 18:26:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.380 Malloc0 00:04:49.380 18:26:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:49.640 Malloc1 00:04:49.640 18:26:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.640 18:26:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:49.899 /dev/nbd0 00:04:49.899 18:26:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:04:49.899 18:26:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.899 1+0 records in 00:04:49.899 1+0 records out 00:04:49.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034347 s, 11.9 MB/s 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:49.899 18:26:03 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:49.899 18:26:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.899 18:26:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.899 18:26:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:50.156 /dev/nbd1 00:04:50.156 18:26:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:50.156 18:26:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:50.156 1+0 records in 00:04:50.156 1+0 records out 00:04:50.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034027 s, 12.0 MB/s 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:50.156 18:26:03 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:50.156 18:26:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:50.156 18:26:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.156 18:26:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:50.156 18:26:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.156 18:26:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:50.415 { 00:04:50.415 "nbd_device": "/dev/nbd0", 00:04:50.415 "bdev_name": "Malloc0" 00:04:50.415 }, 00:04:50.415 { 00:04:50.415 "nbd_device": "/dev/nbd1", 00:04:50.415 "bdev_name": "Malloc1" 00:04:50.415 } 00:04:50.415 ]' 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:50.415 { 00:04:50.415 "nbd_device": "/dev/nbd0", 00:04:50.415 "bdev_name": "Malloc0" 00:04:50.415 }, 00:04:50.415 { 00:04:50.415 "nbd_device": "/dev/nbd1", 00:04:50.415 "bdev_name": "Malloc1" 00:04:50.415 } 00:04:50.415 ]' 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:50.415 /dev/nbd1' 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:50.415 /dev/nbd1' 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:50.415 256+0 records in 00:04:50.415 256+0 records out 00:04:50.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105493 s, 99.4 MB/s 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:50.415 256+0 records in 00:04:50.415 256+0 records out 00:04:50.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257505 s, 40.7 MB/s 00:04:50.415 18:26:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:50.415 18:26:03 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:50.674 256+0 records in 00:04:50.674 256+0 records out 00:04:50.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276912 s, 37.9 MB/s 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.674 18:26:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:50.931 18:26:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.188 18:26:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.447 18:26:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:51.447 18:26:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:51.447 18:26:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.447 18:26:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:51.705 18:26:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:51.705 18:26:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.705 18:26:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:51.705 18:26:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:51.705 18:26:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:51.705 18:26:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:51.705 18:26:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:51.705 18:26:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:51.705 18:26:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:51.964 18:26:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:51.964 [2024-05-16 18:26:05.457113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.222 [2024-05-16 18:26:05.572845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.222 [2024-05-16 18:26:05.572848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.222 [2024-05-16 18:26:05.630021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:52.222 [2024-05-16 18:26:05.630145] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:52.222 [2024-05-16 18:26:05.630170] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:54.759 spdk_app_start Round 1 00:04:54.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:54.760 18:26:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:54.760 18:26:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:54.760 18:26:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60025 /var/tmp/spdk-nbd.sock 00:04:54.760 18:26:08 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 60025 ']' 00:04:54.760 18:26:08 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:54.760 18:26:08 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:54.760 18:26:08 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:54.760 18:26:08 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:54.760 18:26:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:55.327 18:26:08 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:55.327 18:26:08 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:55.327 18:26:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.327 Malloc0 00:04:55.327 18:26:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:55.586 Malloc1 00:04:55.586 18:26:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.586 18:26:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:55.845 /dev/nbd0 00:04:55.845 18:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:55.845 18:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:55.845 
18:26:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:55.845 1+0 records in 00:04:55.845 1+0 records out 00:04:55.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484829 s, 8.4 MB/s 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:55.845 18:26:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:55.845 18:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:55.845 18:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:55.845 18:26:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.104 /dev/nbd1 00:04:56.104 18:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.104 18:26:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.104 1+0 records in 00:04:56.104 1+0 records out 00:04:56.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219091 s, 18.7 MB/s 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:56.104 18:26:09 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:56.104 18:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:04:56.104 18:26:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.104 18:26:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.104 18:26:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.362 18:26:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.621 { 00:04:56.621 "nbd_device": "/dev/nbd0", 00:04:56.621 "bdev_name": "Malloc0" 00:04:56.621 }, 00:04:56.621 { 00:04:56.621 "nbd_device": "/dev/nbd1", 00:04:56.621 "bdev_name": "Malloc1" 00:04:56.621 } 00:04:56.621 ]' 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.621 { 00:04:56.621 "nbd_device": "/dev/nbd0", 00:04:56.621 "bdev_name": "Malloc0" 00:04:56.621 }, 00:04:56.621 { 00:04:56.621 "nbd_device": "/dev/nbd1", 00:04:56.621 "bdev_name": "Malloc1" 00:04:56.621 } 00:04:56.621 ]' 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.621 /dev/nbd1' 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:56.621 /dev/nbd1' 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:56.621 256+0 records in 00:04:56.621 256+0 records out 00:04:56.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00749831 s, 140 MB/s 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:56.621 256+0 records in 00:04:56.621 256+0 records out 00:04:56.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222934 s, 47.0 MB/s 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:56.621 18:26:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:56.621 256+0 records in 00:04:56.621 256+0 records out 00:04:56.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243144 s, 43.1 MB/s 00:04:56.621 18:26:10 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.621 18:26:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:56.880 18:26:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.139 
18:26:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.139 18:26:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.397 18:26:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.397 18:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.397 18:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.676 18:26:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.676 18:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.676 18:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.676 18:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.676 18:26:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.676 18:26:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.676 18:26:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.676 18:26:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.676 18:26:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.676 18:26:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.949 18:26:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.949 [2024-05-16 18:26:11.430224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.207 [2024-05-16 18:26:11.540154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.207 [2024-05-16 18:26:11.540164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.207 [2024-05-16 18:26:11.595793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:58.207 [2024-05-16 18:26:11.595891] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:58.207 [2024-05-16 18:26:11.595906] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:00.739 18:26:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:00.739 spdk_app_start Round 2 00:05:00.739 18:26:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:00.739 18:26:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60025 /var/tmp/spdk-nbd.sock 00:05:00.739 18:26:14 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 60025 ']' 00:05:00.739 18:26:14 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:00.739 18:26:14 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:00.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:00.739 18:26:14 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:00.739 18:26:14 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:00.739 18:26:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:00.998 18:26:14 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:00.998 18:26:14 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:00.998 18:26:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.565 Malloc0 00:05:01.565 18:26:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:01.565 Malloc1 00:05:01.565 18:26:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.565 18:26:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:01.823 /dev/nbd0 00:05:01.823 18:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:01.823 18:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:01.823 18:26:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:01.823 18:26:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:01.823 18:26:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:01.823 18:26:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:01.823 18:26:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:01.824 18:26:15 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:01.824 18:26:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:01.824 18:26:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:01.824 18:26:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:01.824 1+0 records in 00:05:01.824 1+0 records out 
00:05:01.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654579 s, 6.3 MB/s 00:05:01.824 18:26:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.824 18:26:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:01.824 18:26:15 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:01.824 18:26:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:01.824 18:26:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:01.824 18:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:01.824 18:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:01.824 18:26:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.081 /dev/nbd1 00:05:02.081 18:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.081 18:26:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.081 18:26:15 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:02.081 18:26:15 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:02.081 18:26:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:02.081 18:26:15 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:02.081 18:26:15 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:02.081 18:26:15 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:02.081 18:26:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:02.081 18:26:15 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:02.082 18:26:15 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.082 1+0 records in 00:05:02.082 1+0 records out 00:05:02.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279062 s, 14.7 MB/s 00:05:02.082 18:26:15 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.082 18:26:15 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:02.082 18:26:15 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:02.082 18:26:15 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:02.082 18:26:15 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:02.082 18:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.082 18:26:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.082 18:26:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.082 18:26:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.082 18:26:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:02.340 18:26:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:02.340 { 00:05:02.340 "nbd_device": "/dev/nbd0", 00:05:02.340 "bdev_name": "Malloc0" 00:05:02.340 }, 00:05:02.340 { 00:05:02.340 "nbd_device": "/dev/nbd1", 00:05:02.340 "bdev_name": "Malloc1" 00:05:02.340 } 
00:05:02.340 ]' 00:05:02.340 18:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:02.340 18:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:02.340 { 00:05:02.340 "nbd_device": "/dev/nbd0", 00:05:02.340 "bdev_name": "Malloc0" 00:05:02.340 }, 00:05:02.340 { 00:05:02.340 "nbd_device": "/dev/nbd1", 00:05:02.340 "bdev_name": "Malloc1" 00:05:02.340 } 00:05:02.340 ]' 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:02.628 /dev/nbd1' 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:02.628 /dev/nbd1' 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:02.628 256+0 records in 00:05:02.628 256+0 records out 00:05:02.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495727 s, 212 MB/s 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:02.628 256+0 records in 00:05:02.628 256+0 records out 00:05:02.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025214 s, 41.6 MB/s 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:02.628 256+0 records in 00:05:02.628 256+0 records out 00:05:02.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294292 s, 35.6 MB/s 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.628 18:26:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:02.907 18:26:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.166 18:26:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.425 18:26:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.425 18:26:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:03.683 18:26:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:03.941 [2024-05-16 18:26:17.365799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.200 [2024-05-16 18:26:17.477129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.200 [2024-05-16 18:26:17.477140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.200 [2024-05-16 18:26:17.531305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:04.200 [2024-05-16 18:26:17.531390] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.200 [2024-05-16 18:26:17.531405] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:06.747 18:26:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60025 /var/tmp/spdk-nbd.sock 00:05:06.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.747 18:26:20 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 60025 ']' 00:05:06.747 18:26:20 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.747 18:26:20 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:06.747 18:26:20 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
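The nbd_dd_data_verify trace above reduces to a write-then-verify round trip: fill a scratch file from /dev/urandom, push it onto each exported NBD device with O_DIRECT, then byte-compare the device contents against the scratch file. A minimal standalone sketch of that pattern, assuming illustrative device names and a temporary scratch path rather than the exact helper from nbd_common.sh:

  #!/usr/bin/env bash
  # Sketch of the dd/cmp round trip seen in the trace (illustrative, not the real helper).
  set -euo pipefail
  nbd_list=(/dev/nbd0 /dev/nbd1)                 # assumed device names
  tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)     # assumed scratch location

  # Write phase: 256 x 4 KiB blocks of random data, copied to each device with O_DIRECT.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # Verify phase: byte-compare the first 1 MiB of each device against the scratch file.
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"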
00:05:06.747 18:26:20 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:06.747 18:26:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:07.017 18:26:20 event.app_repeat -- event/event.sh@39 -- # killprocess 60025 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 60025 ']' 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 60025 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60025 00:05:07.017 killing process with pid 60025 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60025' 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@965 -- # kill 60025 00:05:07.017 18:26:20 event.app_repeat -- common/autotest_common.sh@970 -- # wait 60025 00:05:07.289 spdk_app_start is called in Round 0. 00:05:07.289 Shutdown signal received, stop current app iteration 00:05:07.289 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:05:07.289 spdk_app_start is called in Round 1. 00:05:07.289 Shutdown signal received, stop current app iteration 00:05:07.289 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:05:07.289 spdk_app_start is called in Round 2. 00:05:07.289 Shutdown signal received, stop current app iteration 00:05:07.289 Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 reinitialization... 00:05:07.289 spdk_app_start is called in Round 3. 00:05:07.289 Shutdown signal received, stop current app iteration 00:05:07.289 ************************************ 00:05:07.289 END TEST app_repeat 00:05:07.289 ************************************ 00:05:07.289 18:26:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:07.289 18:26:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:07.289 00:05:07.289 real 0m19.344s 00:05:07.289 user 0m43.376s 00:05:07.289 sys 0m2.982s 00:05:07.289 18:26:20 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.289 18:26:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 18:26:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:07.289 18:26:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:07.289 18:26:20 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.289 18:26:20 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.289 18:26:20 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.289 ************************************ 00:05:07.289 START TEST cpu_locks 00:05:07.289 ************************************ 00:05:07.289 18:26:20 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:07.289 * Looking for test storage... 
00:05:07.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:07.289 18:26:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:07.289 18:26:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:07.289 18:26:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:07.289 18:26:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:07.289 18:26:20 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:07.289 18:26:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.289 18:26:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.551 ************************************ 00:05:07.551 START TEST default_locks 00:05:07.551 ************************************ 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60458 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60458 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 60458 ']' 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:07.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:07.551 18:26:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.551 [2024-05-16 18:26:20.848109] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
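waitforlisten, traced above with max_retries=100, simply polls until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket, or gives up if the process dies first. A plausible minimal version of that loop; using rpc_get_methods as the probe call and a 0.5 s poll interval are assumptions of this sketch, not details read from the trace:

  # Poll an SPDK app's RPC socket until it responds or the retry budget runs out.
  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 1; i <= max_retries; i++)); do
      kill -0 "$pid" 2> /dev/null || return 1    # target exited before listening
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
          &> /dev/null; then
        return 0                                 # RPC server is up
      fi
      sleep 0.5                                  # assumed poll interval
    done
    return 1
  }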
00:05:07.551 [2024-05-16 18:26:20.848217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60458 ] 00:05:07.551 [2024-05-16 18:26:20.981631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.819 [2024-05-16 18:26:21.102735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.819 [2024-05-16 18:26:21.159561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:08.387 18:26:21 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:08.387 18:26:21 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:08.387 18:26:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60458 00:05:08.387 18:26:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60458 00:05:08.387 18:26:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60458 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 60458 ']' 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 60458 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60458 00:05:08.647 killing process with pid 60458 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60458' 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 60458 00:05:08.647 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 60458 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60458 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60458 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60458 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 60458 ']' 00:05:09.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
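The locks_exist and killprocess steps traced above are the core of this test: confirm that the running target holds a /var/tmp/spdk_cpu_lock_* file lock, then tear it down and reap it. A condensed sketch of both helpers, assuming the calling shell is the parent of the target so wait can reap it:

  # Does the given pid hold at least one SPDK per-core lock file?
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  # Kill a test target: refuse obviously wrong targets, signal it, then reap it.
  killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                        # must still be running
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 for spdk_tgt
    [ "$process_name" = sudo ] && return 1            # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                               # collect the exit status
  }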
00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:09.215 ERROR: process (pid: 60458) is no longer running 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.215 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (60458) - No such process 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:09.215 00:05:09.215 real 0m1.765s 00:05:09.215 user 0m1.839s 00:05:09.215 sys 0m0.527s 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:09.215 18:26:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.215 ************************************ 00:05:09.215 END TEST default_locks 00:05:09.215 ************************************ 00:05:09.215 18:26:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:09.215 18:26:22 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:09.215 18:26:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:09.215 18:26:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.215 ************************************ 00:05:09.215 START TEST default_locks_via_rpc 00:05:09.215 ************************************ 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60510 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60510 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 60510 ']' 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:09.215 18:26:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.215 [2024-05-16 18:26:22.679703] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:09.215 [2024-05-16 18:26:22.679811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60510 ] 00:05:09.475 [2024-05-16 18:26:22.822439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.475 [2024-05-16 18:26:22.954584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.733 [2024-05-16 18:26:23.015430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60510 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60510 00:05:10.299 18:26:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60510 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 60510 ']' 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 60510 00:05:10.865 18:26:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60510 00:05:10.865 killing process with pid 60510 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60510' 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 60510 00:05:10.865 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 60510 00:05:11.125 00:05:11.125 real 0m1.873s 00:05:11.125 user 0m1.985s 00:05:11.125 sys 0m0.545s 00:05:11.125 ************************************ 00:05:11.125 END TEST default_locks_via_rpc 00:05:11.125 ************************************ 00:05:11.125 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.125 18:26:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.125 18:26:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:11.125 18:26:24 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.125 18:26:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.125 18:26:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.125 ************************************ 00:05:11.125 START TEST non_locking_app_on_locked_coremask 00:05:11.125 ************************************ 00:05:11.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60561 00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60561 /var/tmp/spdk.sock 00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60561 ']' 00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
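In the default_locks_via_rpc run that finishes above, the per-core locks are released and re-acquired on a live target through the RPC interface instead of command-line flags. A sketch of the same sequence driven directly with rpc.py; the default socket path and the pidof lookup are assumptions of this sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock

  "$rpc" -s "$sock" framework_disable_cpumask_locks               # release the per-core file locks
  lslocks -p "$(pidof spdk_tgt)" | grep -c spdk_cpu_lock || true  # expect 0 while disabled
  "$rpc" -s "$sock" framework_enable_cpumask_locks                # take the locks again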
00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:11.125 18:26:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.125 [2024-05-16 18:26:24.599497] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:11.125 [2024-05-16 18:26:24.599607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60561 ] 00:05:11.384 [2024-05-16 18:26:24.736547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.384 [2024-05-16 18:26:24.849196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.643 [2024-05-16 18:26:24.902411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:12.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.210 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:12.210 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:12.210 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60577 00:05:12.210 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:12.210 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60577 /var/tmp/spdk2.sock 00:05:12.210 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60577 ']' 00:05:12.210 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.210 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:12.211 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.211 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:12.211 18:26:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.211 [2024-05-16 18:26:25.578447] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:12.211 [2024-05-16 18:26:25.578814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60577 ] 00:05:12.469 [2024-05-16 18:26:25.719168] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
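The non_locking_app_on_locked_coremask case above demonstrates that a second target can share core 0 with a lock-holding target as long as it opts out of the lock and uses its own RPC socket. A sketch of launching that pair with the flags shown in the trace; the pid variables are local to this sketch:

  tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # First instance claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
  "$tgt" -m 0x1 &
  pid1=$!

  # Second instance reuses core 0 but skips the lock and listens on a second socket.
  "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!

  # Both targets come up; without --disable-cpumask-locks the second start would fail.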
00:05:12.469 [2024-05-16 18:26:25.719248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.469 [2024-05-16 18:26:25.948386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.728 [2024-05-16 18:26:26.053601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:13.296 18:26:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:13.296 18:26:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:13.296 18:26:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60561 00:05:13.296 18:26:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60561 00:05:13.296 18:26:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60561 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60561 ']' 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 60561 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60561 00:05:14.229 killing process with pid 60561 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60561' 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 60561 00:05:14.229 18:26:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 60561 00:05:14.812 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60577 00:05:14.812 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60577 ']' 00:05:14.812 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 60577 00:05:14.812 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:14.813 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:14.813 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60577 00:05:14.813 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:14.813 killing process with pid 60577 00:05:14.813 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:14.813 18:26:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60577' 00:05:14.813 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 60577 00:05:14.813 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 60577 00:05:15.377 00:05:15.377 real 0m4.049s 00:05:15.377 user 0m4.488s 00:05:15.377 sys 0m1.108s 00:05:15.377 ************************************ 00:05:15.377 END TEST non_locking_app_on_locked_coremask 00:05:15.377 ************************************ 00:05:15.377 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.378 18:26:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.378 18:26:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:15.378 18:26:28 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.378 18:26:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.378 18:26:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.378 ************************************ 00:05:15.378 START TEST locking_app_on_unlocked_coremask 00:05:15.378 ************************************ 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60644 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60644 /var/tmp/spdk.sock 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60644 ']' 00:05:15.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:15.378 18:26:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.378 [2024-05-16 18:26:28.687229] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:15.378 [2024-05-16 18:26:28.687734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60644 ] 00:05:15.378 [2024-05-16 18:26:28.822663] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:15.378 [2024-05-16 18:26:28.822735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.636 [2024-05-16 18:26:28.936000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.636 [2024-05-16 18:26:28.989461] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:16.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.203 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:16.203 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:16.203 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60660 00:05:16.203 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60660 /var/tmp/spdk2.sock 00:05:16.203 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60660 ']' 00:05:16.203 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.203 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.203 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:16.204 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.204 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.204 18:26:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.462 [2024-05-16 18:26:29.731884] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:16.462 [2024-05-16 18:26:29.731994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60660 ] 00:05:16.462 [2024-05-16 18:26:29.879167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.720 [2024-05-16 18:26:30.103708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.720 [2024-05-16 18:26:30.213869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:17.285 18:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.285 18:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:17.285 18:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60660 00:05:17.285 18:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60660 00:05:17.286 18:26:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60644 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60644 ']' 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 60644 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60644 00:05:18.220 killing process with pid 60644 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60644' 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 60644 00:05:18.220 18:26:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 60644 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60660 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60660 ']' 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 60660 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60660 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # 
process_name=reactor_0 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:19.155 killing process with pid 60660 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60660' 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 60660 00:05:19.155 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 60660 00:05:19.415 00:05:19.415 real 0m4.160s 00:05:19.415 user 0m4.603s 00:05:19.415 sys 0m1.098s 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.415 ************************************ 00:05:19.415 END TEST locking_app_on_unlocked_coremask 00:05:19.415 ************************************ 00:05:19.415 18:26:32 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:19.415 18:26:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.415 18:26:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.415 18:26:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.415 ************************************ 00:05:19.415 START TEST locking_app_on_locked_coremask 00:05:19.415 ************************************ 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60727 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60727 /var/tmp/spdk.sock 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60727 ']' 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:19.415 18:26:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.673 [2024-05-16 18:26:32.985963] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:19.673 [2024-05-16 18:26:32.986089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60727 ] 00:05:19.673 [2024-05-16 18:26:33.126947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.933 [2024-05-16 18:26:33.260049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.933 [2024-05-16 18:26:33.319901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:20.510 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:20.510 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:20.510 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60743 00:05:20.510 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60743 /var/tmp/spdk2.sock 00:05:20.510 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:20.510 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:20.510 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60743 /var/tmp/spdk2.sock 00:05:20.510 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:20.510 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.511 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:20.511 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:20.511 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60743 /var/tmp/spdk2.sock 00:05:20.511 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 60743 ']' 00:05:20.511 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.511 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:20.511 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.511 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:20.511 18:26:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.511 [2024-05-16 18:26:33.980591] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
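The NOT wrapper used above inverts the wrapped command's exit status, so the test passes only when the command fails — which is exactly what is expected here, since the second target cannot claim core 0 while pid 60727 holds its lock. A stripped-down sketch of that wrapper; the real helper in autotest_common.sh also validates its argument and treats signal exits (status > 128) separately, which this sketch omits:

  # Succeed only if the wrapped command fails (simplified from the traced helper).
  NOT() {
    if "$@"; then
      return 1        # command unexpectedly succeeded
    fi
    return 0          # command failed, which is what the caller wants
  }

  # Usage as in the trace: expect waitforlisten to give up on the doomed second target.
  NOT waitforlisten 60743 /var/tmp/spdk2.sock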
00:05:20.511 [2024-05-16 18:26:33.981025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60743 ] 00:05:20.770 [2024-05-16 18:26:34.124188] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60727 has claimed it. 00:05:20.770 [2024-05-16 18:26:34.124261] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:21.338 ERROR: process (pid: 60743) is no longer running 00:05:21.338 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (60743) - No such process 00:05:21.338 18:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.338 18:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:21.338 18:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:21.338 18:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.338 18:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:21.338 18:26:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.338 18:26:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60727 00:05:21.338 18:26:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60727 00:05:21.338 18:26:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60727 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 60727 ']' 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 60727 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60727 00:05:21.597 killing process with pid 60727 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60727' 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 60727 00:05:21.597 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 60727 00:05:22.165 00:05:22.165 real 0m2.586s 00:05:22.165 user 0m2.922s 00:05:22.165 sys 0m0.656s 00:05:22.165 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.165 18:26:35 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:22.165 ************************************ 00:05:22.165 END TEST locking_app_on_locked_coremask 00:05:22.165 ************************************ 00:05:22.165 18:26:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:22.165 18:26:35 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.165 18:26:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.165 18:26:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.165 ************************************ 00:05:22.165 START TEST locking_overlapped_coremask 00:05:22.165 ************************************ 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60789 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60789 /var/tmp/spdk.sock 00:05:22.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 60789 ']' 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:22.166 18:26:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.166 [2024-05-16 18:26:35.568499] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:22.166 [2024-05-16 18:26:35.568608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60789 ] 00:05:22.424 [2024-05-16 18:26:35.706700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.424 [2024-05-16 18:26:35.838486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.424 [2024-05-16 18:26:35.838623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.424 [2024-05-16 18:26:35.838635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.424 [2024-05-16 18:26:35.898073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:23.360 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:23.360 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60811 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60811 /var/tmp/spdk2.sock 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60811 /var/tmp/spdk2.sock 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 60811 /var/tmp/spdk2.sock 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 60811 ']' 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:23.361 18:26:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.361 [2024-05-16 18:26:36.698287] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:23.361 [2024-05-16 18:26:36.698859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60811 ] 00:05:23.361 [2024-05-16 18:26:36.847712] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60789 has claimed it. 00:05:23.361 [2024-05-16 18:26:36.847779] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:24.297 ERROR: process (pid: 60811) is no longer running 00:05:24.297 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (60811) - No such process 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60789 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 60789 ']' 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 60789 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60789 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60789' 00:05:24.297 killing process with pid 60789 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 60789 00:05:24.297 18:26:37 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 60789 00:05:24.557 00:05:24.557 real 0m2.488s 00:05:24.557 user 0m7.043s 00:05:24.557 sys 0m0.465s 00:05:24.557 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.557 18:26:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.557 ************************************ 00:05:24.557 END TEST locking_overlapped_coremask 00:05:24.557 ************************************ 00:05:24.557 18:26:38 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:24.557 18:26:38 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.557 18:26:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.557 18:26:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.557 ************************************ 00:05:24.557 START TEST locking_overlapped_coremask_via_rpc 00:05:24.557 ************************************ 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60852 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60852 /var/tmp/spdk.sock 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 60852 ']' 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.557 18:26:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.816 [2024-05-16 18:26:38.110615] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:24.816 [2024-05-16 18:26:38.110719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60852 ] 00:05:24.816 [2024-05-16 18:26:38.248851] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
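Note: the overlapped-coremask test that just ended (and the via_rpc variant that follows) closes by running check_remaining_locks, traced above as event/cpu_locks.sh@36 through @38. Stripped of the xtrace noise, the check is roughly the sketch below; the only assumption is that a target started with -m 0x7 should leave exactly the three lock files 000 through 002 behind, which is what the {000..002} expansion in the trace encodes.

    # simplified re-creation of the traced check: with a 3-core mask (0x7),
    # exactly /var/tmp/spdk_cpu_lock_000..002 are expected to remain
    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }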
00:05:24.816 [2024-05-16 18:26:38.248902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.074 [2024-05-16 18:26:38.363699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.074 [2024-05-16 18:26:38.363860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.074 [2024-05-16 18:26:38.363863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.074 [2024-05-16 18:26:38.421890] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60870 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60870 /var/tmp/spdk2.sock 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 60870 ']' 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.641 18:26:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.899 [2024-05-16 18:26:39.142486] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:25.899 [2024-05-16 18:26:39.142829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60870 ] 00:05:25.899 [2024-05-16 18:26:39.282927] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:25.899 [2024-05-16 18:26:39.282977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.158 [2024-05-16 18:26:39.524292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.158 [2024-05-16 18:26:39.524418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.158 [2024-05-16 18:26:39.524418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:26.158 [2024-05-16 18:26:39.634262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.727 [2024-05-16 18:26:40.154946] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60852 has claimed it. 00:05:26.727 request: 00:05:26.727 { 00:05:26.727 "method": "framework_enable_cpumask_locks", 00:05:26.727 "req_id": 1 00:05:26.727 } 00:05:26.727 Got JSON-RPC error response 00:05:26.727 response: 00:05:26.727 { 00:05:26.727 "code": -32603, 00:05:26.727 "message": "Failed to claim CPU core: 2" 00:05:26.727 } 00:05:26.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
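The two failures seen so far ("Cannot create lock on core 2" earlier, and the -32603 "Failed to claim CPU core: 2" JSON-RPC response just above) both come down to the core masks in use: the first target runs with -m 0x7, the second with -m 0x1c. A quick sanity check of the overlap, using nothing beyond shell arithmetic:

    # 0x7 = 0b00111 -> cores 0,1,2; 0x1c = 0b11100 -> cores 2,3,4
    printf 'overlapping mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. bit 2 == core 2
    for core in {0..4}; do
        (( 0x7  & (1 << core) )) && echo "0x7  covers core $core"
        (( 0x1c & (1 << core) )) && echo "0x1c covers core $core"
    done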
00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60852 /var/tmp/spdk.sock 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 60852 ']' 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.727 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.986 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.986 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:26.986 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60870 /var/tmp/spdk2.sock 00:05:26.986 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 60870 ']' 00:05:26.986 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.986 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.986 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
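The rpc_cmd calls traced above are the harness's RPC helper. Reproducing the same check by hand should look roughly like the following; the scripts/rpc.py invocation form is an assumption here, while the socket paths, the method name, and the expected outcomes (success against the lock-free first target, error -32603 against the one whose mask overlaps an already-claimed core) are taken from the request/response pair logged above.

    # first target (default socket /var/tmp/spdk.sock): claims its cores
    scripts/rpc.py framework_enable_cpumask_locks
    # second target, started with -r /var/tmp/spdk2.sock and an overlapping mask:
    # expected to fail with "Failed to claim CPU core: 2"
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks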
00:05:26.986 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.986 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.245 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.245 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:27.245 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:27.245 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.245 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.245 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.245 00:05:27.245 real 0m2.641s 00:05:27.245 user 0m1.362s 00:05:27.245 sys 0m0.203s 00:05:27.245 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.245 18:26:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.245 ************************************ 00:05:27.245 END TEST locking_overlapped_coremask_via_rpc 00:05:27.245 ************************************ 00:05:27.245 18:26:40 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:27.245 18:26:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60852 ]] 00:05:27.245 18:26:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60852 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 60852 ']' 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 60852 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60852 00:05:27.245 killing process with pid 60852 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60852' 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 60852 00:05:27.245 18:26:40 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 60852 00:05:27.812 18:26:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60870 ]] 00:05:27.812 18:26:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60870 00:05:27.812 18:26:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 60870 ']' 00:05:27.812 18:26:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 60870 00:05:27.812 18:26:41 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:27.812 18:26:41 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.812 
18:26:41 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60870 00:05:27.812 killing process with pid 60870 00:05:27.812 18:26:41 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:27.812 18:26:41 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:27.812 18:26:41 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60870' 00:05:27.812 18:26:41 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 60870 00:05:27.812 18:26:41 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 60870 00:05:28.380 18:26:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.380 Process with pid 60852 is not found 00:05:28.380 Process with pid 60870 is not found 00:05:28.380 18:26:41 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:28.380 18:26:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60852 ]] 00:05:28.380 18:26:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60852 00:05:28.380 18:26:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 60852 ']' 00:05:28.380 18:26:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 60852 00:05:28.380 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (60852) - No such process 00:05:28.380 18:26:41 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 60852 is not found' 00:05:28.380 18:26:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60870 ]] 00:05:28.380 18:26:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60870 00:05:28.380 18:26:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 60870 ']' 00:05:28.380 18:26:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 60870 00:05:28.380 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (60870) - No such process 00:05:28.380 18:26:41 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 60870 is not found' 00:05:28.380 18:26:41 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.380 00:05:28.380 real 0m20.903s 00:05:28.380 user 0m36.684s 00:05:28.380 sys 0m5.467s 00:05:28.380 18:26:41 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.380 18:26:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.380 ************************************ 00:05:28.380 END TEST cpu_locks 00:05:28.380 ************************************ 00:05:28.380 00:05:28.380 real 0m47.936s 00:05:28.380 user 1m32.666s 00:05:28.380 sys 0m9.223s 00:05:28.380 ************************************ 00:05:28.380 END TEST event 00:05:28.380 ************************************ 00:05:28.380 18:26:41 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.380 18:26:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.380 18:26:41 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:28.380 18:26:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.380 18:26:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.380 18:26:41 -- common/autotest_common.sh@10 -- # set +x 00:05:28.380 ************************************ 00:05:28.380 START TEST thread 00:05:28.380 ************************************ 00:05:28.380 18:26:41 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:28.380 * Looking for test storage... 
00:05:28.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:28.380 18:26:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.380 18:26:41 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:28.380 18:26:41 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.380 18:26:41 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.380 ************************************ 00:05:28.380 START TEST thread_poller_perf 00:05:28.380 ************************************ 00:05:28.380 18:26:41 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.380 [2024-05-16 18:26:41.812319] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:28.380 [2024-05-16 18:26:41.812413] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60993 ] 00:05:28.639 [2024-05-16 18:26:41.950818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.639 [2024-05-16 18:26:42.075212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.639 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:30.018 ====================================== 00:05:30.018 busy:2207736068 (cyc) 00:05:30.018 total_run_count: 299000 00:05:30.018 tsc_hz: 2200000000 (cyc) 00:05:30.018 ====================================== 00:05:30.018 poller_cost: 7383 (cyc), 3355 (nsec) 00:05:30.018 00:05:30.018 real 0m1.410s 00:05:30.018 user 0m1.241s 00:05:30.018 sys 0m0.061s 00:05:30.018 18:26:43 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.018 18:26:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.018 ************************************ 00:05:30.018 END TEST thread_poller_perf 00:05:30.018 ************************************ 00:05:30.018 18:26:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.018 18:26:43 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:30.018 18:26:43 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:30.018 18:26:43 thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.018 ************************************ 00:05:30.018 START TEST thread_poller_perf 00:05:30.018 ************************************ 00:05:30.018 18:26:43 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:30.018 [2024-05-16 18:26:43.272153] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:30.018 [2024-05-16 18:26:43.272256] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61028 ] 00:05:30.018 [2024-05-16 18:26:43.411895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.278 Running 1000 pollers for 1 seconds with 0 microseconds period. 
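The first poller_perf run above reports busy:2207736068 cycles over total_run_count 299000 at tsc_hz 2200000000, and a poller_cost of 7383 (cyc), 3355 (nsec). Assuming poller_cost is simply busy cycles divided by run count, then converted through the TSC rate, the reported figures are self-consistent under integer division; the second run further down checks out the same way (2202297976 / 4197000 = 524 cycles, 238 ns).

    echo $(( 2207736068 / 299000 ))             # 7383 cycles per poller invocation
    echo $(( 7383 * 1000000000 / 2200000000 ))  # 3355 ns at a 2.2 GHz TSC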
00:05:30.278 [2024-05-16 18:26:43.522705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.268 ====================================== 00:05:31.268 busy:2202297976 (cyc) 00:05:31.268 total_run_count: 4197000 00:05:31.268 tsc_hz: 2200000000 (cyc) 00:05:31.268 ====================================== 00:05:31.268 poller_cost: 524 (cyc), 238 (nsec) 00:05:31.268 00:05:31.268 real 0m1.360s 00:05:31.268 user 0m1.193s 00:05:31.268 sys 0m0.059s 00:05:31.268 18:26:44 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.268 ************************************ 00:05:31.268 END TEST thread_poller_perf 00:05:31.268 ************************************ 00:05:31.268 18:26:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.268 18:26:44 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:31.268 ************************************ 00:05:31.268 END TEST thread 00:05:31.268 ************************************ 00:05:31.268 00:05:31.268 real 0m2.959s 00:05:31.268 user 0m2.498s 00:05:31.268 sys 0m0.240s 00:05:31.268 18:26:44 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.268 18:26:44 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.268 18:26:44 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:31.268 18:26:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.268 18:26:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.268 18:26:44 -- common/autotest_common.sh@10 -- # set +x 00:05:31.268 ************************************ 00:05:31.268 START TEST accel 00:05:31.268 ************************************ 00:05:31.268 18:26:44 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:31.527 * Looking for test storage... 00:05:31.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:31.527 18:26:44 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:31.527 18:26:44 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:31.527 18:26:44 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.527 18:26:44 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61103 00:05:31.527 18:26:44 accel -- accel/accel.sh@63 -- # waitforlisten 61103 00:05:31.527 18:26:44 accel -- common/autotest_common.sh@827 -- # '[' -z 61103 ']' 00:05:31.527 18:26:44 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.527 18:26:44 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.527 18:26:44 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.527 18:26:44 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:31.527 18:26:44 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:31.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:31.527 18:26:44 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.527 18:26:44 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.527 18:26:44 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.527 18:26:44 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.527 18:26:44 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.527 18:26:44 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.527 18:26:44 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.527 18:26:44 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:31.527 18:26:44 accel -- accel/accel.sh@41 -- # jq -r . 00:05:31.527 [2024-05-16 18:26:44.836721] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:31.527 [2024-05-16 18:26:44.836825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61103 ] 00:05:31.527 [2024-05-16 18:26:44.968298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.787 [2024-05-16 18:26:45.086253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.787 [2024-05-16 18:26:45.141066] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:32.355 18:26:45 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.355 18:26:45 accel -- common/autotest_common.sh@860 -- # return 0 00:05:32.355 18:26:45 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:32.355 18:26:45 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:32.355 18:26:45 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:32.355 18:26:45 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:32.355 18:26:45 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:32.355 18:26:45 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:32.355 18:26:45 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:32.355 18:26:45 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.355 18:26:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.355 18:26:45 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.355 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.355 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.355 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.355 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.355 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.355 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.355 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.355 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.355 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.355 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.355 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.355 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.355 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.355 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.355 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.355 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.355 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.356 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.356 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 
18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # IFS== 00:05:32.614 18:26:45 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:32.614 18:26:45 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:32.614 18:26:45 accel -- accel/accel.sh@75 -- # killprocess 61103 00:05:32.614 18:26:45 accel -- common/autotest_common.sh@946 -- # '[' -z 61103 ']' 00:05:32.614 18:26:45 accel -- common/autotest_common.sh@950 -- # kill -0 61103 00:05:32.614 18:26:45 accel -- common/autotest_common.sh@951 -- # uname 00:05:32.614 18:26:45 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:32.615 18:26:45 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61103 00:05:32.615 18:26:45 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:32.615 killing process with pid 61103 00:05:32.615 18:26:45 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:32.615 18:26:45 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61103' 00:05:32.615 18:26:45 accel -- common/autotest_common.sh@965 -- # kill 61103 00:05:32.615 18:26:45 accel -- common/autotest_common.sh@970 -- # wait 61103 00:05:32.874 18:26:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:32.874 18:26:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:32.874 18:26:46 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:32.874 18:26:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.874 18:26:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.874 18:26:46 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:32.874 18:26:46 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:32.874 18:26:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:32.874 18:26:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.874 18:26:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.874 18:26:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.874 18:26:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.874 18:26:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.874 18:26:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:32.874 18:26:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
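The get_expected_opcs helper traced above (accel/accel.sh@70 through @73) pipes rpc_cmd accel_get_opc_assignments through jq to turn the JSON map of opcode-to-module assignments into key=value lines that the IFS== read loop can consume. The filter's effect on a small made-up payload (the opcode names here are only illustrative):

    echo '{"copy":"software","crc32c":"software"}' \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # output:
    # copy=software
    # crc32c=software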
00:05:32.874 18:26:46 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:32.874 18:26:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:33.134 18:26:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:33.134 18:26:46 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:33.134 18:26:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.134 18:26:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.134 ************************************ 00:05:33.134 START TEST accel_missing_filename 00:05:33.134 ************************************ 00:05:33.134 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:33.134 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:33.134 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:33.134 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.134 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.134 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.134 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.134 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:33.134 18:26:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:33.134 18:26:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:33.134 18:26:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.134 18:26:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.134 18:26:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.134 18:26:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.134 18:26:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.134 18:26:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:33.134 18:26:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:33.134 [2024-05-16 18:26:46.426843] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:33.134 [2024-05-16 18:26:46.426945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61154 ] 00:05:33.134 [2024-05-16 18:26:46.566112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.394 [2024-05-16 18:26:46.682844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.394 [2024-05-16 18:26:46.736961] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.394 [2024-05-16 18:26:46.812474] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:33.652 A filename is required. 
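accel_missing_filename above deliberately runs accel_perf -t 1 -w compress with no input file, which is what triggers "A filename is required." Per the option listing printed further below, compress/decompress workloads take the uncompressed input via -l; the compress_verify test that follows supplies exactly that (plus -y, which the compress path then rejects in turn). A corrected invocation would presumably look like the line below, reusing the same input file the next test passes:

    # same workload, with the uncompressed input file the failing run omitted
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib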
00:05:33.652 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:33.652 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.652 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:33.652 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:33.652 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:33.652 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.652 00:05:33.652 real 0m0.503s 00:05:33.652 user 0m0.338s 00:05:33.652 sys 0m0.110s 00:05:33.652 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.652 18:26:46 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:33.652 ************************************ 00:05:33.652 END TEST accel_missing_filename 00:05:33.652 ************************************ 00:05:33.652 18:26:46 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:33.652 18:26:46 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:33.652 18:26:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:33.652 18:26:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.652 ************************************ 00:05:33.652 START TEST accel_compress_verify 00:05:33.652 ************************************ 00:05:33.652 18:26:46 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:33.652 18:26:46 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:33.652 18:26:46 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:33.652 18:26:46 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.652 18:26:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.652 18:26:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.652 18:26:46 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.652 18:26:46 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:33.652 18:26:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:33.652 18:26:46 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:33.652 18:26:46 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.652 18:26:46 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.652 18:26:46 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.652 18:26:46 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.652 18:26:46 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.652 18:26:46 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:33.652 18:26:46 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:05:33.652 [2024-05-16 18:26:46.974053] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:33.652 [2024-05-16 18:26:46.974154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61179 ] 00:05:33.652 [2024-05-16 18:26:47.103810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.911 [2024-05-16 18:26:47.213289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.911 [2024-05-16 18:26:47.268966] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.911 [2024-05-16 18:26:47.346421] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:34.170 00:05:34.170 Compression does not support the verify option, aborting. 00:05:34.170 18:26:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:34.170 18:26:47 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.170 18:26:47 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:34.170 18:26:47 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:34.170 18:26:47 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:34.170 18:26:47 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.170 00:05:34.170 real 0m0.486s 00:05:34.170 user 0m0.313s 00:05:34.170 sys 0m0.113s 00:05:34.170 18:26:47 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.170 18:26:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:34.170 ************************************ 00:05:34.170 END TEST accel_compress_verify 00:05:34.170 ************************************ 00:05:34.170 18:26:47 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:34.170 18:26:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:34.170 18:26:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.170 18:26:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.170 ************************************ 00:05:34.170 START TEST accel_wrong_workload 00:05:34.170 ************************************ 00:05:34.170 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:34.170 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:34.170 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:34.170 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:34.170 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.170 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:34.170 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.170 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:34.170 18:26:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:34.171 18:26:47 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:34.171 18:26:47 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.171 18:26:47 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.171 18:26:47 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.171 18:26:47 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.171 18:26:47 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.171 18:26:47 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:34.171 18:26:47 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:34.171 Unsupported workload type: foobar 00:05:34.171 [2024-05-16 18:26:47.509603] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:34.171 accel_perf options: 00:05:34.171 [-h help message] 00:05:34.171 [-q queue depth per core] 00:05:34.171 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:34.171 [-T number of threads per core 00:05:34.171 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:34.171 [-t time in seconds] 00:05:34.171 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:34.171 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:34.171 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:34.171 [-l for compress/decompress workloads, name of uncompressed input file 00:05:34.171 [-S for crc32c workload, use this seed value (default 0) 00:05:34.171 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:34.171 [-f for fill workload, use this BYTE value (default 255) 00:05:34.171 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:34.171 [-y verify result if this switch is on] 00:05:34.171 [-a tasks to allocate per core (default: same value as -q)] 00:05:34.171 Can be used to spread operations across a wider range of memory. 
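For a concrete pairing of the options listed above, the crc32c test later in this log combines -t 1 (run for one second), -w crc32c (workload), -S 32 (crc seed value) and -y (verify results). The flag combination below is taken verbatim from that run; dropping the -c /dev/fd/62 config argument that the harness adds is an assumption for standalone use.

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y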
00:05:34.171 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:34.171 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.171 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.171 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.171 00:05:34.171 real 0m0.032s 00:05:34.171 user 0m0.015s 00:05:34.171 sys 0m0.016s 00:05:34.171 ************************************ 00:05:34.171 END TEST accel_wrong_workload 00:05:34.171 ************************************ 00:05:34.171 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.171 18:26:47 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:34.171 18:26:47 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:34.171 18:26:47 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:34.171 18:26:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.171 18:26:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.171 ************************************ 00:05:34.171 START TEST accel_negative_buffers 00:05:34.171 ************************************ 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:34.171 18:26:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:34.171 18:26:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:34.171 18:26:47 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.171 18:26:47 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.171 18:26:47 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.171 18:26:47 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.171 18:26:47 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.171 18:26:47 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:34.171 18:26:47 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:34.171 -x option must be non-negative. 
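The accel_negative_buffers run traced next passes -x -1, which the option parser rejects with "-x option must be non-negative." Per the help text above, the xor workload needs at least two source buffers, so the smallest value that should presumably be accepted is -x 2:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2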
00:05:34.171 [2024-05-16 18:26:47.590072] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:34.171 accel_perf options: 00:05:34.171 [-h help message] 00:05:34.171 [-q queue depth per core] 00:05:34.171 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:34.171 [-T number of threads per core 00:05:34.171 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:34.171 [-t time in seconds] 00:05:34.171 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:34.171 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:34.171 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:34.171 [-l for compress/decompress workloads, name of uncompressed input file 00:05:34.171 [-S for crc32c workload, use this seed value (default 0) 00:05:34.171 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:34.171 [-f for fill workload, use this BYTE value (default 255) 00:05:34.171 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:34.171 [-y verify result if this switch is on] 00:05:34.171 [-a tasks to allocate per core (default: same value as -q)] 00:05:34.171 Can be used to spread operations across a wider range of memory. 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.171 00:05:34.171 real 0m0.033s 00:05:34.171 user 0m0.017s 00:05:34.171 sys 0m0.016s 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.171 ************************************ 00:05:34.171 END TEST accel_negative_buffers 00:05:34.171 ************************************ 00:05:34.171 18:26:47 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:34.171 18:26:47 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:34.171 18:26:47 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:34.171 18:26:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.171 18:26:47 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.171 ************************************ 00:05:34.171 START TEST accel_crc32c 00:05:34.171 ************************************ 00:05:34.171 18:26:47 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:34.171 18:26:47 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:34.171 18:26:47 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:34.171 [2024-05-16 18:26:47.669705] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:34.171 [2024-05-16 18:26:47.669807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61243 ] 00:05:34.431 [2024-05-16 18:26:47.811042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.689 [2024-05-16 18:26:47.938464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.689 18:26:47 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.689 18:26:47 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.689 18:26:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.689 18:26:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.689 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.690 18:26:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:36.066 18:26:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.067 18:26:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:36.067 18:26:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.067 00:05:36.067 real 0m1.537s 00:05:36.067 user 0m1.316s 00:05:36.067 sys 0m0.126s 00:05:36.067 18:26:49 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.067 18:26:49 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:36.067 ************************************ 00:05:36.067 END TEST accel_crc32c 00:05:36.067 ************************************ 00:05:36.067 18:26:49 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:36.067 18:26:49 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:36.067 18:26:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:36.067 18:26:49 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.067 ************************************ 00:05:36.067 START TEST accel_crc32c_C2 00:05:36.067 ************************************ 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
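Each case here calls build_accel_config before launching accel_perf; the trace shows it assembling an accel_json_cfg array (empty in this run, since no hardware module is configured) and piping the result through jq, which accel_perf then reads back as "-c /dev/fd/62". A rough sketch of that mechanism, assuming process substitution and simplifying the real accel.sh helper:

    # Assumed simplification of accel.sh's build_accel_config: join any
    # module-specific JSON snippets into a single accel subsystem config.
    build_accel_config() {
        local accel_json_cfg=()
        # e.g. accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') when a
        # hardware module is requested; in this run the array stays empty.
        local IFS=,
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }

    # accel_perf then consumes the config through a file descriptor, roughly
    # (assumes jq and a built SPDK tree):
    ./build/examples/accel_perf -c <(build_accel_config) -t 1 -w crc32c -y -C 2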
00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:36.067 [2024-05-16 18:26:49.253779] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:36.067 [2024-05-16 18:26:49.254067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61277 ] 00:05:36.067 [2024-05-16 18:26:49.384948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.067 [2024-05-16 18:26:49.499179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.067 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.326 18:26:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.263 ************************************ 00:05:37.263 END TEST accel_crc32c_C2 00:05:37.263 ************************************ 00:05:37.263 00:05:37.263 real 0m1.497s 00:05:37.263 user 0m1.290s 00:05:37.263 sys 0m0.115s 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:37.263 18:26:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:37.522 18:26:50 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:37.522 18:26:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:37.522 18:26:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:37.522 18:26:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.522 ************************************ 00:05:37.522 START TEST accel_copy 00:05:37.522 ************************************ 00:05:37.522 18:26:50 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:37.522 18:26:50 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:37.522 18:26:50 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:37.522 18:26:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.522 18:26:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.522 18:26:50 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:37.522 18:26:50 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:37.522 18:26:50 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:05:37.522 18:26:50 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.522 18:26:50 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.523 18:26:50 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.523 18:26:50 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.523 18:26:50 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.523 18:26:50 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:37.523 18:26:50 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:37.523 [2024-05-16 18:26:50.801546] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:37.523 [2024-05-16 18:26:50.801634] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61312 ] 00:05:37.523 [2024-05-16 18:26:50.933977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.784 [2024-05-16 18:26:51.064050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.784 18:26:51 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.785 18:26:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.163 18:26:52 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:39.163 ************************************ 00:05:39.163 END TEST accel_copy 00:05:39.163 ************************************ 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:39.163 18:26:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.163 00:05:39.163 real 0m1.530s 00:05:39.163 user 0m1.304s 00:05:39.163 sys 0m0.127s 00:05:39.163 18:26:52 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.163 18:26:52 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:39.163 18:26:52 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.163 18:26:52 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:39.163 18:26:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.163 18:26:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.163 ************************************ 00:05:39.163 START TEST accel_fill 00:05:39.163 ************************************ 00:05:39.163 18:26:52 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:39.163 18:26:52 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
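The accel_fill case configured above drives accel_perf with a larger queue depth and an explicit fill byte. For reference, the equivalent manual invocation (illustrative only: the path comes from this log's VM, and the harness additionally feeds the JSON config via -c /dev/fd/62):

    # Flag meanings, per the accel_perf usage text earlier in this log:
    #   -t 1    run for 1 second        -w fill  workload type
    #   -f 128  fill byte value         -q 64    queue depth per core
    #   -a 64   tasks to allocate/core  -y       verify the result
    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y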
00:05:39.163 [2024-05-16 18:26:52.392763] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:39.163 [2024-05-16 18:26:52.392897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61344 ] 00:05:39.163 [2024-05-16 18:26:52.528946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.163 [2024-05-16 18:26:52.648922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.422 18:26:52 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:39.423 18:26:52 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:39.423 18:26:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:40.800 ************************************ 00:05:40.800 END TEST accel_fill 00:05:40.800 ************************************ 00:05:40.800 18:26:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.800 00:05:40.800 real 0m1.520s 00:05:40.800 user 0m1.295s 00:05:40.800 sys 0m0.128s 00:05:40.800 18:26:53 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.800 18:26:53 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:40.800 18:26:53 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:40.800 18:26:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:40.800 18:26:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.800 18:26:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.800 ************************************ 00:05:40.800 START TEST accel_copy_crc32c 00:05:40.800 ************************************ 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.800 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.801 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.801 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.801 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:40.801 18:26:53 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:40.801 [2024-05-16 18:26:53.966765] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:40.801 [2024-05-16 18:26:53.966896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61381 ] 00:05:40.801 [2024-05-16 18:26:54.107407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.801 [2024-05-16 18:26:54.220718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.801 18:26:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:42.214 ************************************ 00:05:42.214 END TEST accel_copy_crc32c 00:05:42.214 ************************************ 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.214 00:05:42.214 real 0m1.521s 00:05:42.214 user 0m1.301s 00:05:42.214 sys 0m0.126s 00:05:42.214 18:26:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:42.215 18:26:55 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:42.215 18:26:55 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:42.215 18:26:55 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:42.215 18:26:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:42.215 18:26:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.215 ************************************ 00:05:42.215 START TEST accel_copy_crc32c_C2 00:05:42.215 ************************************ 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:42.215 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:42.215 [2024-05-16 18:26:55.538968] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:42.215 [2024-05-16 18:26:55.539064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61410 ] 00:05:42.215 [2024-05-16 18:26:55.671620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.474 [2024-05-16 18:26:55.804144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:42.474 18:26:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.928 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.929 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:43.929 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:43.929 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:43.929 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:43.929 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.929 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:43.929 18:26:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.929 00:05:43.929 real 0m1.527s 00:05:43.929 user 0m1.306s 00:05:43.929 sys 0m0.127s 00:05:43.929 18:26:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.929 18:26:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:43.929 ************************************ 00:05:43.929 END TEST accel_copy_crc32c_C2 00:05:43.929 ************************************ 00:05:43.929 18:26:57 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:05:43.929 18:26:57 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:43.929 18:26:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.929 18:26:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.929 ************************************ 00:05:43.929 START TEST accel_dualcast 00:05:43.929 ************************************ 00:05:43.929 18:26:57 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:43.929 18:26:57 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:43.929 [2024-05-16 18:26:57.121980] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
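Each TEST block in this trace is the accel.sh harness calling run_test, which launches the accel_perf example binary with the named workload. A minimal sketch of re-running the dualcast case started above by hand, assuming the same build tree and keeping only the flags that appear in the trace (the -c /dev/fd/62 argument is presumably the harness passing its JSON accel config over a file descriptor, so it is dropped here):

    # sketch, not harness output: 1-second dualcast pass with result verification (-y), software module
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y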
00:05:43.929 [2024-05-16 18:26:57.122139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61450 ] 00:05:43.929 [2024-05-16 18:26:57.264711] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.929 [2024-05-16 18:26:57.381599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:44.188 18:26:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.567 ************************************ 00:05:45.567 END TEST accel_dualcast 00:05:45.567 ************************************ 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:58 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:45.567 18:26:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.567 00:05:45.567 real 0m1.536s 00:05:45.567 user 0m1.318s 00:05:45.567 sys 0m0.120s 00:05:45.567 18:26:58 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:45.567 18:26:58 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:45.567 18:26:58 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:45.567 18:26:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:45.567 18:26:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:45.567 18:26:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.567 ************************************ 00:05:45.567 START TEST accel_compare 00:05:45.567 ************************************ 00:05:45.567 18:26:58 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:45.567 18:26:58 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:45.567 [2024-05-16 18:26:58.707547] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
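The three [[ ... ]] checks that close each TEST block are the pass/fail assertions: they confirm that an accel module and an opcode were captured from the run and that the module used was the software path. A stripped-down sketch of that pattern, with the variable names borrowed from the trace and the values hard-coded instead of parsed:

    # sketch only: the shape of the end-of-test assertion seen above
    accel_module=software   # the real script fills these from the accel_perf run, not from literals
    accel_opc=dualcast
    [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]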
00:05:45.567 [2024-05-16 18:26:58.707642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61483 ] 00:05:45.567 [2024-05-16 18:26:58.844027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.567 [2024-05-16 18:26:58.966816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:45.567 18:26:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.946 18:27:00 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:46.946 18:27:00 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.946 00:05:46.946 real 0m1.525s 00:05:46.946 user 0m1.300s 00:05:46.946 sys 0m0.126s 00:05:46.946 ************************************ 00:05:46.946 END TEST accel_compare 00:05:46.946 ************************************ 00:05:46.946 18:27:00 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.946 18:27:00 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:46.946 18:27:00 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:46.946 18:27:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:46.946 18:27:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.946 18:27:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.946 ************************************ 00:05:46.946 START TEST accel_xor 00:05:46.946 ************************************ 00:05:46.946 18:27:00 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:46.946 18:27:00 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:46.946 [2024-05-16 18:27:00.284414] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
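The xor pass starting here runs with the default of two source buffers (the traced config below records val=2), while the next TEST block repeats the same workload with -x 3 (traced as val=3), so -x is evidently the source-buffer count -- an inference from the traced values, not something the log spells out. The two underlying commands recorded in the log differ only in that flag:

    # as recorded in this log (harness config descriptor omitted): 2-source vs. 3-source xor
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3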
00:05:46.946 [2024-05-16 18:27:00.284515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61519 ] 00:05:46.946 [2024-05-16 18:27:00.424395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.205 [2024-05-16 18:27:00.542393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:47.205 18:27:00 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.205 18:27:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.605 00:05:48.605 real 0m1.534s 00:05:48.605 user 0m1.316s 00:05:48.605 sys 0m0.121s 00:05:48.605 18:27:01 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.605 18:27:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:48.605 ************************************ 00:05:48.605 END TEST accel_xor 00:05:48.605 ************************************ 00:05:48.605 18:27:01 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:48.605 18:27:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:48.605 18:27:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.605 18:27:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.605 ************************************ 00:05:48.605 START TEST accel_xor 00:05:48.605 ************************************ 00:05:48.605 18:27:01 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:48.605 18:27:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:48.605 [2024-05-16 18:27:01.866688] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
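The real/user/sys triple printed at the end of each block (0m1.534s real for the two-source xor just finished) has the standard bash time format, which suggests run_test wraps each accel_test invocation in time; that is an assumption about the wrapper, whose definition is not part of this log. A hand-run equivalent would be:

    # sketch: time a single workload run the way the harness reports it
    time /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3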
00:05:48.605 [2024-05-16 18:27:01.866778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61552 ] 00:05:48.605 [2024-05-16 18:27:02.007417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.913 [2024-05-16 18:27:02.174059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.913 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:48.914 18:27:02 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:48.914 18:27:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:50.292 18:27:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.292 00:05:50.292 real 0m1.671s 00:05:50.292 user 0m1.424s 00:05:50.292 sys 0m0.152s 00:05:50.292 18:27:03 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.292 18:27:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:50.292 ************************************ 00:05:50.292 END TEST accel_xor 00:05:50.292 ************************************ 00:05:50.292 18:27:03 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:50.292 18:27:03 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:50.292 18:27:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.292 18:27:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.292 ************************************ 00:05:50.292 START TEST accel_dif_verify 00:05:50.292 ************************************ 00:05:50.292 18:27:03 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:50.292 18:27:03 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:50.292 [2024-05-16 18:27:03.600181] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
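The dif_verify configuration traced below carries more sizes than the copy-style workloads: two 4096-byte values plus a 512-byte and an 8-byte value, which plausibly correspond to the data buffers, the block size and the per-block DIF metadata, though the log itself does not label them. The invocation recorded for this block, minus the harness-specific config descriptor, is:

    # sketch, flags copied from the traced command line for this TEST block
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify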
00:05:50.292 [2024-05-16 18:27:03.600535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61588 ] 00:05:50.292 [2024-05-16 18:27:03.738659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.561 [2024-05-16 18:27:03.892130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:50.561 18:27:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 ************************************ 00:05:51.955 END TEST accel_dif_verify 00:05:51.955 ************************************ 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:51.955 18:27:05 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.955 00:05:51.955 real 0m1.586s 00:05:51.955 user 0m1.341s 00:05:51.955 sys 0m0.149s 00:05:51.955 18:27:05 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.955 18:27:05 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:51.955 18:27:05 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:51.955 18:27:05 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:51.955 18:27:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.955 18:27:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.955 ************************************ 00:05:51.955 START TEST accel_dif_generate 00:05:51.955 ************************************ 00:05:51.955 18:27:05 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:51.955 18:27:05 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:51.955 [2024-05-16 18:27:05.236684] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:51.955 [2024-05-16 18:27:05.236794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61628 ] 00:05:51.955 [2024-05-16 18:27:05.376938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.214 [2024-05-16 18:27:05.503272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:52.214 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.214 
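The command line and EAL parameter dump above show exactly how this sub-test drives the example binary: build/examples/accel_perf is given a JSON accel config on /dev/fd/62, a one-second run time (-t 1) and the dif_generate workload (-w). A rough way to repeat a similar run by hand is sketched below; the placeholder config is an assumption and may need to match whatever accel.sh's build_accel_config emits on your system.

# Hedged reproduction sketch; the path and the placeholder config are assumptions.
SPDK=/home/vagrant/spdk_repo/spdk
# -c: JSON accel config via process substitution (appears as /dev/fd/62 above)
# -t: run time in seconds   -w: workload name
"$SPDK/build/examples/accel_perf" -c <(printf '{}') -t 1 -w dif_generate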
18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:52.215 18:27:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.594 ************************************ 00:05:53.594 END TEST accel_dif_generate 00:05:53.594 ************************************ 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:53.594 18:27:06 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.594 00:05:53.594 real 0m1.551s 00:05:53.594 user 0m1.324s 
00:05:53.594 sys 0m0.134s 00:05:53.594 18:27:06 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.594 18:27:06 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:53.594 18:27:06 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:53.594 18:27:06 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:53.594 18:27:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:53.594 18:27:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.594 ************************************ 00:05:53.594 START TEST accel_dif_generate_copy 00:05:53.594 ************************************ 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:53.594 18:27:06 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:53.594 [2024-05-16 18:27:06.838628] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:53.594 [2024-05-16 18:27:06.838715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61657 ] 00:05:53.594 [2024-05-16 18:27:06.970227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.854 [2024-05-16 18:27:07.105135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 
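Each sub-test logs the same style of start-up notice seen above: the SPDK git sha (cf8ec7cfe), the DPDK version, and the full DPDK EAL argument list, where --file-prefix embeds the accel_perf pid (spdk_pid61657 here) so every run keeps its hugepage files separate. When digging through a saved console log, those pids can be pulled back out with a one-liner such as the following (the log filename is an assumption):

# Convenience sketch for a saved log; 'console.log' is an assumed filename.
grep -o 'file-prefix=spdk_pid[0-9]*' console.log | sort -u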
-- # val= 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.854 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:53.855 18:27:07 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.233 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.234 00:05:55.234 real 0m1.542s 00:05:55.234 user 0m1.314s 00:05:55.234 sys 0m0.134s 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.234 ************************************ 00:05:55.234 END TEST accel_dif_generate_copy 00:05:55.234 ************************************ 00:05:55.234 18:27:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:55.234 18:27:08 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:55.234 18:27:08 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.234 18:27:08 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:55.234 18:27:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.234 18:27:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.234 ************************************ 00:05:55.234 START TEST accel_comp 00:05:55.234 ************************************ 00:05:55.234 18:27:08 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:55.234 18:27:08 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:55.234 [2024-05-16 18:27:08.435634] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:55.234 [2024-05-16 18:27:08.435771] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61697 ] 00:05:55.234 [2024-05-16 18:27:08.576976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.234 [2024-05-16 18:27:08.711398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.493 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # 
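For the compress workload the harness adds -l pointing at test/accel/bib, the input file whose contents accel_perf compresses for the duration of the run. A hand-run equivalent, with the same caveat that the placeholder config is an assumption, would be:

# -w compress with -l naming the input data file used by this sub-test;
# the empty config is a placeholder assumption, as in the earlier sketch.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" -c <(printf '{}') -t 1 -w compress -l "$SPDK/test/accel/bib"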
val=compress 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:55.494 18:27:08 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:55.494 18:27:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:56.872 ************************************ 00:05:56.872 END TEST accel_comp 00:05:56.872 ************************************ 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:56.872 18:27:09 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.872 00:05:56.872 real 0m1.568s 00:05:56.872 user 0m1.336s 00:05:56.872 sys 0m0.139s 00:05:56.872 18:27:09 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:56.872 18:27:09 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:56.872 18:27:10 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.872 18:27:10 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:56.872 18:27:10 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:56.872 18:27:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.872 ************************************ 00:05:56.872 START TEST accel_decomp 00:05:56.872 ************************************ 00:05:56.872 18:27:10 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:56.872 
18:27:10 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:56.872 18:27:10 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:56.873 [2024-05-16 18:27:10.057513] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:56.873 [2024-05-16 18:27:10.057604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61728 ] 00:05:56.873 [2024-05-16 18:27:10.194403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.873 [2024-05-16 18:27:10.320820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
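accel_decomp mirrors the compress case but runs -w decompress against the same bib file and adds -y; judging from the val=Yes entry in the trace that follows (where the other workloads showed val=No), -y turns on verification of the output, though that reading is inferred from this log rather than quoted from accel_perf's help text. Sketch:

# decompress the same input; -y appears to enable result verification
# (an inference from this log; confirm against the usage text of your accel_perf build).
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" -c <(printf '{}') -t 1 -w decompress -l "$SPDK/test/accel/bib" -y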
00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.132 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 18:27:10 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:57.133 18:27:10 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.070 18:27:11 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.070 00:05:58.070 real 0m1.535s 00:05:58.070 user 0m1.316s 00:05:58.070 sys 0m0.127s 00:05:58.070 18:27:11 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.070 ************************************ 00:05:58.070 END TEST accel_decomp 00:05:58.070 ************************************ 00:05:58.070 18:27:11 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:58.331 18:27:11 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:58.331 18:27:11 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:58.331 18:27:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.331 18:27:11 accel -- common/autotest_common.sh@10 -- # set +x 
00:05:58.331 ************************************ 00:05:58.331 START TEST accel_decmop_full 00:05:58.331 ************************************ 00:05:58.331 18:27:11 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:58.331 18:27:11 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:58.331 [2024-05-16 18:27:11.647179] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:05:58.331 [2024-05-16 18:27:11.647282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61768 ] 00:05:58.331 [2024-05-16 18:27:11.782870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.590 [2024-05-16 18:27:11.917378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.590 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:11 accel.accel_decmop_full -- 
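The _full variant adds -o 0 to the decompress arguments, and the '111250 bytes' value in the trace above, in place of the 4096 bytes seen in the other runs, suggests the workload then operates on the whole input rather than a fixed block size; that reading of -o 0 is an inference from this log, not a documented statement. Sketch under the same placeholder-config assumption:

# Same decompress run with -o 0 added; the trace suggests this switches the
# transfer size to the full input (111250 bytes here) - inferred, not specified.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/accel_perf" -c <(printf '{}') -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0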
accel/accel.sh@19 -- # read -r var val 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.591 18:27:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:58.591 18:27:12 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 18:27:13 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:59.968 ************************************ 00:05:59.968 END TEST accel_decmop_full 00:05:59.968 ************************************ 00:05:59.968 18:27:13 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.968 00:05:59.968 real 0m1.581s 00:05:59.968 user 0m1.346s 00:05:59.968 sys 0m0.141s 00:05:59.968 18:27:13 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.969 18:27:13 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:59.969 18:27:13 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:59.969 18:27:13 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:59.969 18:27:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.969 18:27:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.969 ************************************ 00:05:59.969 START TEST accel_decomp_mcore 00:05:59.969 ************************************ 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:59.969 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:59.969 [2024-05-16 18:27:13.276686] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:05:59.969 [2024-05-16 18:27:13.276802] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61803 ] 00:05:59.969 [2024-05-16 18:27:13.416513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.228 [2024-05-16 18:27:13.541046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.228 [2024-05-16 18:27:13.541231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.228 [2024-05-16 18:27:13.542509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.228 [2024-05-16 18:27:13.542529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.228 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.228 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:00.229 18:27:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.608 00:06:01.608 real 0m1.543s 00:06:01.608 user 0m0.019s 00:06:01.608 sys 0m0.003s 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.608 ************************************ 00:06:01.608 END TEST accel_decomp_mcore 00:06:01.608 ************************************ 00:06:01.608 18:27:14 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:01.608 18:27:14 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.608 18:27:14 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:01.608 18:27:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.608 18:27:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.608 ************************************ 00:06:01.608 START TEST accel_decomp_full_mcore 00:06:01.608 ************************************ 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:01.608 18:27:14 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
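The "full" variant that begins here differs from the plain mcore case only by -o 0; judging by the buffer sizes echoed in the traces ('4096 bytes' without it, '111250 bytes' with it), -o 0 appears to make accel_perf use the full size of the input file as the transfer size rather than fixed 4 KiB blocks. A sketch of the equivalent manual invocation, under the same assumptions as above:

    # full-payload decompress across 4 cores instead of 4 KiB transfers
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf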
00:06:01.608 [2024-05-16 18:27:14.868771] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:01.608 [2024-05-16 18:27:14.868887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61840 ] 00:06:01.608 [2024-05-16 18:27:15.006088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.868 [2024-05-16 18:27:15.127004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.869 [2024-05-16 18:27:15.127132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.869 [2024-05-16 18:27:15.127912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.869 [2024-05-16 18:27:15.127920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:01.869 18:27:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:03.248 18:27:16 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.248 00:06:03.248 real 0m1.546s 00:06:03.248 user 0m4.755s 00:06:03.248 sys 0m0.139s 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.248 ************************************ 00:06:03.248 18:27:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:03.248 END TEST accel_decomp_full_mcore 00:06:03.248 ************************************ 00:06:03.248 18:27:16 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:03.248 18:27:16 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:03.248 18:27:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.248 18:27:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.248 ************************************ 00:06:03.248 START TEST accel_decomp_mthread 00:06:03.248 ************************************ 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:03.248 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:03.248 [2024-05-16 18:27:16.470936] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
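The mthread case traced next swaps the core mask for -T 2; the EAL line below shows the run confined to a single core (-c 0x1), and the 2 echoed in the trace is presumably a per-core worker-thread count. Equivalent manual run, same assumptions as the earlier sketches:

    # single core, two worker threads, 4 KiB decompress ops
    ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2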
00:06:03.248 [2024-05-16 18:27:16.471074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61878 ] 00:06:03.248 [2024-05-16 18:27:16.622350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.508 [2024-05-16 18:27:16.749202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.508 18:27:16 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.508 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:03.509 18:27:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.885 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.886 00:06:04.886 real 0m1.557s 00:06:04.886 user 0m1.320s 00:06:04.886 sys 0m0.138s 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.886 18:27:17 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:04.886 ************************************ 00:06:04.886 END TEST accel_decomp_mthread 00:06:04.886 ************************************ 00:06:04.886 18:27:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.886 18:27:18 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:04.886 18:27:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.886 18:27:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.886 ************************************ 00:06:04.886 START TEST accel_decomp_full_mthread 00:06:04.886 ************************************ 00:06:04.886 18:27:18 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:04.886 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:04.886 [2024-05-16 18:27:18.069406] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
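The build_accel_config / jq -r . steps traced above are how each case produces the JSON handed to accel_perf as -c /dev/fd/62 via process substitution. In this run every hardware-module toggle evaluates to 0, so the generated config carries no module entries and the software module services the decompress ops, which is exactly what the accel_module=software and [[ -n software ]] checks assert afterwards. A rough sketch of that mechanism; the helper name and the exact JSON shape below are inferred, not taken from this log:

    # hypothetical stand-in for the config helper: an accel subsystem with no module entries
    accel_cfg() { jq -r . <<< '{"subsystems": [{"subsystem": "accel", "config": []}]}'; }
    ./build/examples/accel_perf -c <(accel_cfg) -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2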
00:06:04.886 [2024-05-16 18:27:18.069505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61912 ] 00:06:04.886 [2024-05-16 18:27:18.202521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.886 [2024-05-16 18:27:18.336073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:05.144 18:27:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.520 00:06:06.520 real 0m1.565s 00:06:06.520 user 0m1.343s 00:06:06.520 sys 0m0.129s 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.520 18:27:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:06.520 ************************************ 00:06:06.520 END TEST accel_decomp_full_mthread 00:06:06.520 ************************************ 00:06:06.520 18:27:19 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:06:06.520 18:27:19 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:06.520 18:27:19 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:06.520 18:27:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.520 18:27:19 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:06.520 18:27:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.520 18:27:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.520 18:27:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.520 18:27:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.520 18:27:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.520 18:27:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:06.520 18:27:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:06.520 18:27:19 accel -- accel/accel.sh@41 -- # jq -r . 00:06:06.520 ************************************ 00:06:06.520 START TEST accel_dif_functional_tests 00:06:06.520 ************************************ 00:06:06.520 18:27:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:06.520 [2024-05-16 18:27:19.715478] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:06.520 [2024-05-16 18:27:19.715761] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61948 ] 00:06:06.520 [2024-05-16 18:27:19.856179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.521 [2024-05-16 18:27:19.983988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.521 [2024-05-16 18:27:19.984116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.521 [2024-05-16 18:27:19.984141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.779 [2024-05-16 18:27:20.062556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:06.779 00:06:06.779 00:06:06.779 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.779 http://cunit.sourceforge.net/ 00:06:06.779 00:06:06.779 00:06:06.779 Suite: accel_dif 00:06:06.779 Test: verify: DIF generated, GUARD check ...passed 00:06:06.779 Test: verify: DIF generated, APPTAG check ...passed 00:06:06.779 Test: verify: DIF generated, REFTAG check ...passed 00:06:06.779 Test: verify: DIF not generated, GUARD check ...passed 00:06:06.779 Test: verify: DIF not generated, APPTAG check ...[2024-05-16 18:27:20.108190] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:06.779 [2024-05-16 18:27:20.108306] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:06.779 passed 00:06:06.779 Test: verify: DIF not generated, REFTAG check ...passed 00:06:06.779 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:06.779 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:06.779 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-05-16 18:27:20.108374] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:06.779 [2024-05-16 18:27:20.108501] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, 
Actual=14 00:06:06.779 passed 00:06:06.779 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:06.779 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:06.779 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-16 18:27:20.108833] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:06.779 passed 00:06:06.779 Test: verify copy: DIF generated, GUARD check ...passed 00:06:06.779 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:06.779 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:06.779 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:06.779 Test: verify copy: DIF not generated, APPTAG check ...[2024-05-16 18:27:20.109544] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:06.779 passed 00:06:06.779 Test: verify copy: DIF not generated, REFTAG check ...[2024-05-16 18:27:20.109655] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:06.779 passed 00:06:06.779 Test: generate copy: DIF generated, GUARD check ...passed 00:06:06.779 Test: generate copy: DIF generated, APTTAG check ...[2024-05-16 18:27:20.109870] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:06.779 passed 00:06:06.779 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:06.779 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:06.779 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:06.779 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:06.779 Test: generate copy: iovecs-len validate ...[2024-05-16 18:27:20.110577] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:06.779 passed 00:06:06.779 Test: generate copy: buffer alignment validate ...passed 00:06:06.779 00:06:06.779 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.779 suites 1 1 n/a 0 0 00:06:06.779 tests 26 26 26 0 0 00:06:06.779 asserts 115 115 115 0 n/a 00:06:06.779 00:06:06.779 Elapsed time = 0.007 seconds 00:06:07.037 00:06:07.037 real 0m0.667s 00:06:07.037 user 0m0.913s 00:06:07.037 sys 0m0.183s 00:06:07.037 18:27:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.037 18:27:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:07.037 ************************************ 00:06:07.037 END TEST accel_dif_functional_tests 00:06:07.037 ************************************ 00:06:07.037 00:06:07.037 real 0m35.660s 00:06:07.037 user 0m37.154s 00:06:07.037 sys 0m4.274s 00:06:07.037 ************************************ 00:06:07.037 END TEST accel 00:06:07.037 ************************************ 00:06:07.037 18:27:20 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.037 18:27:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.037 18:27:20 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:07.037 18:27:20 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.037 18:27:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.037 18:27:20 -- common/autotest_common.sh@10 -- # set +x 00:06:07.037 ************************************ 00:06:07.037 START TEST accel_rpc 00:06:07.037 ************************************ 00:06:07.037 18:27:20 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:07.037 * Looking for test storage... 00:06:07.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:07.037 18:27:20 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.037 18:27:20 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62018 00:06:07.037 18:27:20 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:07.037 18:27:20 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62018 00:06:07.037 18:27:20 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 62018 ']' 00:06:07.037 18:27:20 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.038 18:27:20 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:07.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.038 18:27:20 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.038 18:27:20 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:07.038 18:27:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.296 [2024-05-16 18:27:20.551904] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
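The accel_rpc suite that starts here exercises the same module selection over the RPC surface: spdk_tgt is launched with --wait-for-rpc so that opcode assignments can be made before subsystem init, then the copy opcode is assigned and framework init is triggered, as the rpc_cmd calls traced below show. A minimal sketch of the same sequence from the repo root, assuming the standard scripts/rpc.py client (the method names and flags match the rpc_cmd invocations in this trace):

    # start the target paused, assign the copy opcode, then finish init and inspect assignments
    ./build/bin/spdk_tgt --wait-for-rpc &
    # wait for the RPC socket (/var/tmp/spdk.sock) to appear, as waitforlisten does in the harness
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy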
00:06:07.296 [2024-05-16 18:27:20.552001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62018 ] 00:06:07.296 [2024-05-16 18:27:20.685203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.555 [2024-05-16 18:27:20.799538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.122 18:27:21 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:08.122 18:27:21 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:08.122 18:27:21 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:08.122 18:27:21 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:08.122 18:27:21 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:08.122 18:27:21 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:08.122 18:27:21 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:08.122 18:27:21 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.122 18:27:21 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.122 18:27:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.122 ************************************ 00:06:08.122 START TEST accel_assign_opcode 00:06:08.122 ************************************ 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.122 [2024-05-16 18:27:21.552825] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.122 [2024-05-16 18:27:21.564816] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.122 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.381 [2024-05-16 18:27:21.626784] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:08.381 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.381 18:27:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:08.381 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.381 18:27:21 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:08.381 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.381 18:27:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:08.381 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.381 software 00:06:08.381 ************************************ 00:06:08.381 END TEST accel_assign_opcode 00:06:08.381 ************************************ 00:06:08.381 00:06:08.381 real 0m0.317s 00:06:08.381 user 0m0.060s 00:06:08.381 sys 0m0.009s 00:06:08.381 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.381 18:27:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:08.640 18:27:21 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62018 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 62018 ']' 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 62018 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62018 00:06:08.640 killing process with pid 62018 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62018' 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@965 -- # kill 62018 00:06:08.640 18:27:21 accel_rpc -- common/autotest_common.sh@970 -- # wait 62018 00:06:08.900 ************************************ 00:06:08.900 END TEST accel_rpc 00:06:08.900 ************************************ 00:06:08.900 00:06:08.900 real 0m1.929s 00:06:08.900 user 0m2.027s 00:06:08.900 sys 0m0.455s 00:06:08.900 18:27:22 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.900 18:27:22 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.900 18:27:22 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:08.900 18:27:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.900 18:27:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.900 18:27:22 -- common/autotest_common.sh@10 -- # set +x 00:06:08.900 ************************************ 00:06:08.900 START TEST app_cmdline 00:06:08.900 ************************************ 00:06:08.900 18:27:22 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:09.159 * Looking for test storage... 00:06:09.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:09.159 18:27:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:09.159 18:27:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62111 00:06:09.159 18:27:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62111 00:06:09.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
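The app_cmdline suite starting here runs the target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods may answer before framework init. Condensed, the checks performed over the next entries look like this (method names and the rpc.py path are taken from the log; the -32601 failure is the expected outcome, not a fault in the run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version                       # allowed: returns the version JSON shown below
    $rpc rpc_get_methods | jq -r '.[]' | sort   # allowed: expects exactly rpc_get_methods and spdk_get_version
    $rpc env_dpdk_get_mem_stats                 # not on the allowlist: rejected with -32601 "Method not found"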
00:06:09.159 18:27:22 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 62111 ']' 00:06:09.159 18:27:22 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.159 18:27:22 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.159 18:27:22 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:09.159 18:27:22 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.159 18:27:22 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.159 18:27:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.159 [2024-05-16 18:27:22.528956] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:09.159 [2024-05-16 18:27:22.529970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62111 ] 00:06:09.418 [2024-05-16 18:27:22.669168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.418 [2024-05-16 18:27:22.790062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.418 [2024-05-16 18:27:22.847989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:10.355 { 00:06:10.355 "version": "SPDK v24.09-pre git sha1 cf8ec7cfe", 00:06:10.355 "fields": { 00:06:10.355 "major": 24, 00:06:10.355 "minor": 9, 00:06:10.355 "patch": 0, 00:06:10.355 "suffix": "-pre", 00:06:10.355 "commit": "cf8ec7cfe" 00:06:10.355 } 00:06:10.355 } 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:10.355 18:27:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
env_dpdk_get_mem_stats 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:10.355 18:27:23 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.613 request: 00:06:10.613 { 00:06:10.613 "method": "env_dpdk_get_mem_stats", 00:06:10.613 "req_id": 1 00:06:10.613 } 00:06:10.613 Got JSON-RPC error response 00:06:10.613 response: 00:06:10.613 { 00:06:10.613 "code": -32601, 00:06:10.613 "message": "Method not found" 00:06:10.613 } 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:10.872 18:27:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62111 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 62111 ']' 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 62111 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62111 00:06:10.872 killing process with pid 62111 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62111' 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@965 -- # kill 62111 00:06:10.872 18:27:24 app_cmdline -- common/autotest_common.sh@970 -- # wait 62111 00:06:11.130 00:06:11.130 real 0m2.146s 00:06:11.130 user 0m2.688s 00:06:11.130 sys 0m0.498s 00:06:11.130 18:27:24 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.130 ************************************ 00:06:11.130 END TEST app_cmdline 00:06:11.130 ************************************ 00:06:11.130 18:27:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.130 18:27:24 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:11.130 18:27:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.130 18:27:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.130 18:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:11.130 ************************************ 00:06:11.130 START TEST 
version 00:06:11.130 ************************************ 00:06:11.130 18:27:24 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:11.389 * Looking for test storage... 00:06:11.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:11.389 18:27:24 version -- app/version.sh@17 -- # get_header_version major 00:06:11.389 18:27:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.389 18:27:24 version -- app/version.sh@14 -- # cut -f2 00:06:11.389 18:27:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.389 18:27:24 version -- app/version.sh@17 -- # major=24 00:06:11.389 18:27:24 version -- app/version.sh@18 -- # get_header_version minor 00:06:11.389 18:27:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.389 18:27:24 version -- app/version.sh@14 -- # cut -f2 00:06:11.389 18:27:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.389 18:27:24 version -- app/version.sh@18 -- # minor=9 00:06:11.389 18:27:24 version -- app/version.sh@19 -- # get_header_version patch 00:06:11.389 18:27:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.389 18:27:24 version -- app/version.sh@14 -- # cut -f2 00:06:11.389 18:27:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.389 18:27:24 version -- app/version.sh@19 -- # patch=0 00:06:11.389 18:27:24 version -- app/version.sh@20 -- # get_header_version suffix 00:06:11.389 18:27:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:11.389 18:27:24 version -- app/version.sh@14 -- # tr -d '"' 00:06:11.389 18:27:24 version -- app/version.sh@14 -- # cut -f2 00:06:11.389 18:27:24 version -- app/version.sh@20 -- # suffix=-pre 00:06:11.389 18:27:24 version -- app/version.sh@22 -- # version=24.9 00:06:11.389 18:27:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:11.389 18:27:24 version -- app/version.sh@28 -- # version=24.9rc0 00:06:11.389 18:27:24 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:11.389 18:27:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:11.389 18:27:24 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:11.389 18:27:24 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:11.389 00:06:11.389 real 0m0.144s 00:06:11.389 user 0m0.075s 00:06:11.389 sys 0m0.102s 00:06:11.389 ************************************ 00:06:11.389 END TEST version 00:06:11.389 ************************************ 00:06:11.389 18:27:24 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.389 18:27:24 version -- common/autotest_common.sh@10 -- # set +x 00:06:11.389 18:27:24 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:11.389 18:27:24 -- spdk/autotest.sh@198 -- # uname -s 00:06:11.390 18:27:24 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:11.390 18:27:24 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:11.390 18:27:24 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:11.390 18:27:24 -- spdk/autotest.sh@205 -- # [[ 0 -eq 
0 ]] 00:06:11.390 18:27:24 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:11.390 18:27:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.390 18:27:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.390 18:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:11.390 ************************************ 00:06:11.390 START TEST spdk_dd 00:06:11.390 ************************************ 00:06:11.390 18:27:24 spdk_dd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:11.390 * Looking for test storage... 00:06:11.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:11.390 18:27:24 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:11.390 18:27:24 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.390 18:27:24 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.390 18:27:24 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.390 18:27:24 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.390 18:27:24 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.390 18:27:24 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.390 18:27:24 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:11.390 18:27:24 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.390 18:27:24 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:11.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:11.958 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:11.958 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:11.958 18:27:25 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:11.958 18:27:25 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:11.958 18:27:25 spdk_dd -- 
scripts/common.sh@309 -- # local bdf bdfs 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 
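nvme_in_userspace, traced above, enumerates NVMe controllers purely from lspci output: class 01 (mass storage), subclass 08 (NVM), programming interface 02. The pipeline it assembles is essentially:

    # NVMe controllers: class 01, subclass 08, prog-if 02 -> print their PCI addresses
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'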
00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:11.958 18:27:25 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:11.958 18:27:25 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:11.958 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 
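check_liburing, which starts above and continues over the following entries, never executes spdk_dd for real: with LD_TRACE_LOADED_OBJECTS=1 the dynamic loader only lists the shared objects the binary would map, and each name is matched against liburing.so.*. Condensed:

    # Under the loader's trace mode the binary is not run; ld.so prints the shared
    # objects it would load, and each name is checked against liburing.so.*
    LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd |
        while read -r lib _ so _; do
            [[ $lib == liburing.so.* ]] && printf '* spdk_dd linked to liburing\n'
        done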
00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 
18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:11.959 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # 
read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:11.960 * spdk_dd linked to liburing 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:11.960 
18:27:25 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@73 -- # 
CONFIG_RAID5F=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:11.960 18:27:25 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:11.960 18:27:25 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:11.960 18:27:25 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:11.960 18:27:25 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:11.960 18:27:25 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:11.960 18:27:25 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.960 18:27:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:11.960 ************************************ 00:06:11.960 START TEST spdk_dd_basic_rw 00:06:11.960 ************************************ 00:06:11.960 18:27:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:11.960 * Looking for test storage... 
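Before the basic_rw read/write passes, get_native_nvme_bs (seen further below) parses spdk_nvme_identify output to find the drive's native block size: it captures the current LBA format number, then reads that format's data size. A sketch of that parsing; the second regex is an assumption inferred from the identify dump, since this excerpt cuts off before showing it:

    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}          # "04" for this controller
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"       # assumed form of the second match
    [[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}     # 4096 bytes for LBA Format #04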
00:06:12.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:12.222 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:12.223 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:12.223 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change 
Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion 
Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- 
dd/basic_rw.sh@93 -- # native_bs=4096 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:12.224 ************************************ 00:06:12.224 START TEST dd_bs_lt_native_bs 00:06:12.224 ************************************ 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1121 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:12.224 18:27:25 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:12.482 { 00:06:12.482 "subsystems": [ 00:06:12.482 { 00:06:12.482 "subsystem": "bdev", 00:06:12.482 "config": [ 00:06:12.482 { 00:06:12.482 "params": { 00:06:12.482 "trtype": "pcie", 00:06:12.482 "traddr": "0000:00:10.0", 00:06:12.482 "name": "Nvme0" 00:06:12.482 }, 00:06:12.482 "method": "bdev_nvme_attach_controller" 00:06:12.482 }, 00:06:12.482 { 00:06:12.482 "method": "bdev_wait_for_examine" 00:06:12.482 } 00:06:12.482 ] 00:06:12.482 } 00:06:12.482 ] 00:06:12.482 } 00:06:12.482 [2024-05-16 18:27:25.753381] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 
initialization... 00:06:12.482 [2024-05-16 18:27:25.753505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62431 ] 00:06:12.482 [2024-05-16 18:27:25.891945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.743 [2024-05-16 18:27:26.015055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.743 [2024-05-16 18:27:26.072757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:12.743 [2024-05-16 18:27:26.178398] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:12.743 [2024-05-16 18:27:26.178509] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.004 [2024-05-16 18:27:26.317055] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:13.004 00:06:13.004 real 0m0.726s 00:06:13.004 user 0m0.508s 00:06:13.004 sys 0m0.166s 00:06:13.004 ************************************ 00:06:13.004 END TEST dd_bs_lt_native_bs 00:06:13.004 ************************************ 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.004 ************************************ 00:06:13.004 START TEST dd_rw 00:06:13.004 ************************************ 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1121 -- # basic_rw 4096 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- 
# bss+=($((native_bs << bs))) 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:13.004 18:27:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.940 18:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:13.940 18:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:13.940 18:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:13.940 18:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:13.940 [2024-05-16 18:27:27.182188] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:13.940 [2024-05-16 18:27:27.182292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62466 ] 00:06:13.940 { 00:06:13.940 "subsystems": [ 00:06:13.940 { 00:06:13.940 "subsystem": "bdev", 00:06:13.940 "config": [ 00:06:13.940 { 00:06:13.940 "params": { 00:06:13.940 "trtype": "pcie", 00:06:13.940 "traddr": "0000:00:10.0", 00:06:13.940 "name": "Nvme0" 00:06:13.940 }, 00:06:13.940 "method": "bdev_nvme_attach_controller" 00:06:13.940 }, 00:06:13.940 { 00:06:13.940 "method": "bdev_wait_for_examine" 00:06:13.940 } 00:06:13.940 ] 00:06:13.940 } 00:06:13.940 ] 00:06:13.940 } 00:06:13.940 [2024-05-16 18:27:27.320574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.199 [2024-05-16 18:27:27.443424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.199 [2024-05-16 18:27:27.498033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:14.459  Copying: 60/60 [kB] (average 29 MBps) 00:06:14.459 00:06:14.459 18:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:14.459 18:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:14.459 18:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:14.459 18:27:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:14.459 [2024-05-16 18:27:27.882135] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
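
The dd/common.sh calls traced above (common.sh@124 through @134) work out the drive's native block size before any data is moved: spdk_nvme_identify is run against 0000:00:10.0, the "Current LBA Format" index is pulled out of the report (#04 here), and that format's Data Size becomes native_bs=4096 for the rest of dd_rw. Below is a standalone sketch of the same extraction; it is an illustration modelled on the output shown above, not the real get_native_nvme_bs helper, and the repo path is the one used by this run.

#!/usr/bin/env bash
# Sketch: derive the native block size from `spdk_nvme_identify` output,
# the same way the trace above does (illustrative, not dd/common.sh itself).
set -euo pipefail

pci=${1:-0000:00:10.0}                            # controller address used by this run
spdk=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}    # assumed repo location, as in the log

id_out=$("$spdk/build/bin/spdk_nvme_identify" -r "trtype:pcie traddr:$pci")

re_current='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id_out =~ $re_current ]] || { echo "no current LBA format found" >&2; exit 1; }
lbaf=${BASH_REMATCH[1]}                           # e.g. 04

re_size="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id_out =~ $re_size ]] || { echo "no data size for LBA format $lbaf" >&2; exit 1; }
native_bs=${BASH_REMATCH[1]}                      # e.g. 4096

echo "native block size: $native_bs"
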
00:06:14.459 [2024-05-16 18:27:27.882261] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62481 ] 00:06:14.459 { 00:06:14.459 "subsystems": [ 00:06:14.459 { 00:06:14.459 "subsystem": "bdev", 00:06:14.459 "config": [ 00:06:14.459 { 00:06:14.459 "params": { 00:06:14.459 "trtype": "pcie", 00:06:14.459 "traddr": "0000:00:10.0", 00:06:14.459 "name": "Nvme0" 00:06:14.459 }, 00:06:14.459 "method": "bdev_nvme_attach_controller" 00:06:14.459 }, 00:06:14.459 { 00:06:14.459 "method": "bdev_wait_for_examine" 00:06:14.459 } 00:06:14.459 ] 00:06:14.459 } 00:06:14.459 ] 00:06:14.459 } 00:06:14.718 [2024-05-16 18:27:28.022158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.718 [2024-05-16 18:27:28.131759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.718 [2024-05-16 18:27:28.184258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.251  Copying: 60/60 [kB] (average 19 MBps) 00:06:15.251 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:15.251 18:27:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:15.251 [2024-05-16 18:27:28.575393] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
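
Earlier in the trace (18:27:25 to 18:27:26), dd_bs_lt_native_bs exercised the rejection path: spdk_dd is launched through the suite's NOT wrapper with --bs=2048, below the 4096-byte native size, and the test only passes because the copy aborts with "--bs value cannot be less than input (1) neither output (4096) native block size" and exits non-zero. A hedged sketch of that inverted check follows; the plain `if !` stands in for the NOT helper, and the input file and config path are made-up placeholders (the real run feeds both over /dev/fd).

#!/usr/bin/env bash
# Sketch of the dd_bs_lt_native_bs idea: the copy must FAIL when --bs is smaller
# than the native block size. bs_check.bin and ./nvme0_bdev.json are placeholders.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

head -c 4096 /dev/urandom > bs_check.bin          # small stand-in for the generated input

if ! "$spdk_dd" --if=bs_check.bin --ob=Nvme0n1 --bs=2048 --json ./nvme0_bdev.json; then
    echo "OK: spdk_dd rejected bs=2048 (< native 4096), as the test expects"
else
    echo "FAIL: undersized --bs was accepted" >&2
    exit 1
fi
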
00:06:15.251 [2024-05-16 18:27:28.575780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62502 ] 00:06:15.251 { 00:06:15.251 "subsystems": [ 00:06:15.251 { 00:06:15.251 "subsystem": "bdev", 00:06:15.251 "config": [ 00:06:15.251 { 00:06:15.251 "params": { 00:06:15.251 "trtype": "pcie", 00:06:15.251 "traddr": "0000:00:10.0", 00:06:15.251 "name": "Nvme0" 00:06:15.251 }, 00:06:15.251 "method": "bdev_nvme_attach_controller" 00:06:15.251 }, 00:06:15.251 { 00:06:15.251 "method": "bdev_wait_for_examine" 00:06:15.251 } 00:06:15.251 ] 00:06:15.251 } 00:06:15.251 ] 00:06:15.251 } 00:06:15.251 [2024-05-16 18:27:28.716721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.510 [2024-05-16 18:27:28.845791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.510 [2024-05-16 18:27:28.904802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:15.769  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:15.769 00:06:15.769 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:15.769 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:15.769 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:15.769 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:15.769 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:15.769 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:15.769 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.710 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:16.710 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:16.710 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:16.710 18:27:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:16.710 [2024-05-16 18:27:29.899816] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
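
Each dd_rw pass in this log follows the same three steps: write a generated dd.dump0 payload to the Nvme0n1 bdev at the chosen --bs and --qd, read the same number of blocks back into dd.dump1, and byte-compare the two files with diff -q (the clear_nvme zeroing between passes is sketched separately further down). A condensed sketch of one pass, with /dev/urandom standing in for gen_bytes and a placeholder config file:

#!/usr/bin/env bash
# Sketch of one basic_rw verification pass as traced above: write, read back, compare.
set -euo pipefail
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf=./nvme0_bdev.json                     # placeholder; the suite uses --json /dev/fd/62
bs=4096 qd=1 count=15                      # 15 x 4096 = 61440 bytes, as in this pass

head -c $((bs * count)) /dev/urandom > dd.dump0    # stand-in for gen_bytes 61440

"$spdk_dd" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$conf"
"$spdk_dd" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json "$conf"

diff -q dd.dump0 dd.dump1 && echo "bs=$bs qd=$qd: read-back matches"
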
00:06:16.710 [2024-05-16 18:27:29.900159] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62521 ] 00:06:16.710 { 00:06:16.710 "subsystems": [ 00:06:16.710 { 00:06:16.710 "subsystem": "bdev", 00:06:16.710 "config": [ 00:06:16.710 { 00:06:16.710 "params": { 00:06:16.710 "trtype": "pcie", 00:06:16.710 "traddr": "0000:00:10.0", 00:06:16.710 "name": "Nvme0" 00:06:16.710 }, 00:06:16.710 "method": "bdev_nvme_attach_controller" 00:06:16.710 }, 00:06:16.710 { 00:06:16.710 "method": "bdev_wait_for_examine" 00:06:16.710 } 00:06:16.710 ] 00:06:16.710 } 00:06:16.710 ] 00:06:16.710 } 00:06:16.710 [2024-05-16 18:27:30.039845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.710 [2024-05-16 18:27:30.158865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.969 [2024-05-16 18:27:30.216283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:17.228  Copying: 60/60 [kB] (average 58 MBps) 00:06:17.228 00:06:17.228 18:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:17.228 18:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:17.228 18:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:17.228 18:27:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:17.228 [2024-05-16 18:27:30.625693] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
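
The same "subsystems" block precedes every spdk_dd launch above and below: gen_conf emits a one-shot bdev configuration (bdev_nvme_attach_controller for Nvme0 at 0000:00:10.0, then bdev_wait_for_examine) and spdk_dd receives it as --json /dev/fd/6x, so the configuration never has to exist as a regular file. A rough equivalent is sketched below; the use of process substitution is an assumption, since the trace only shows the resulting /dev/fd path.

#!/usr/bin/env bash
# Sketch: hand spdk_dd its bdev configuration on an anonymous descriptor.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

"$spdk_dd" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
)
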
00:06:17.228 [2024-05-16 18:27:30.625893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62540 ] 00:06:17.228 { 00:06:17.228 "subsystems": [ 00:06:17.228 { 00:06:17.228 "subsystem": "bdev", 00:06:17.228 "config": [ 00:06:17.228 { 00:06:17.228 "params": { 00:06:17.228 "trtype": "pcie", 00:06:17.228 "traddr": "0000:00:10.0", 00:06:17.228 "name": "Nvme0" 00:06:17.228 }, 00:06:17.228 "method": "bdev_nvme_attach_controller" 00:06:17.228 }, 00:06:17.228 { 00:06:17.228 "method": "bdev_wait_for_examine" 00:06:17.228 } 00:06:17.228 ] 00:06:17.228 } 00:06:17.228 ] 00:06:17.228 } 00:06:17.487 [2024-05-16 18:27:30.766063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.487 [2024-05-16 18:27:30.887004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.487 [2024-05-16 18:27:30.941858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.005  Copying: 60/60 [kB] (average 58 MBps) 00:06:18.005 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:18.005 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:18.005 [2024-05-16 18:27:31.338200] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:18.005 [2024-05-16 18:27:31.338291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62550 ] 00:06:18.005 { 00:06:18.005 "subsystems": [ 00:06:18.005 { 00:06:18.005 "subsystem": "bdev", 00:06:18.005 "config": [ 00:06:18.005 { 00:06:18.005 "params": { 00:06:18.005 "trtype": "pcie", 00:06:18.005 "traddr": "0000:00:10.0", 00:06:18.005 "name": "Nvme0" 00:06:18.005 }, 00:06:18.005 "method": "bdev_nvme_attach_controller" 00:06:18.005 }, 00:06:18.005 { 00:06:18.005 "method": "bdev_wait_for_examine" 00:06:18.005 } 00:06:18.005 ] 00:06:18.005 } 00:06:18.005 ] 00:06:18.005 } 00:06:18.005 [2024-05-16 18:27:31.477239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.300 [2024-05-16 18:27:31.593951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.300 [2024-05-16 18:27:31.650307] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:18.558  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:18.558 00:06:18.558 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:18.558 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:18.558 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:18.558 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:18.558 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:18.558 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:18.558 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:18.558 18:27:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.125 18:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:19.125 18:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:19.125 18:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.125 18:27:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.385 { 00:06:19.385 "subsystems": [ 00:06:19.385 { 00:06:19.385 "subsystem": "bdev", 00:06:19.385 "config": [ 00:06:19.385 { 00:06:19.385 "params": { 00:06:19.385 "trtype": "pcie", 00:06:19.385 "traddr": "0000:00:10.0", 00:06:19.385 "name": "Nvme0" 00:06:19.385 }, 00:06:19.385 "method": "bdev_nvme_attach_controller" 00:06:19.385 }, 00:06:19.385 { 00:06:19.385 "method": "bdev_wait_for_examine" 00:06:19.385 } 00:06:19.385 ] 00:06:19.385 } 00:06:19.385 ] 00:06:19.385 } 00:06:19.385 [2024-05-16 18:27:32.674972] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:19.385 [2024-05-16 18:27:32.675309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62580 ] 00:06:19.385 [2024-05-16 18:27:32.818585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.644 [2024-05-16 18:27:32.939526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.644 [2024-05-16 18:27:32.996341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:19.903  Copying: 56/56 [kB] (average 27 MBps) 00:06:19.903 00:06:19.903 18:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:19.903 18:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:19.903 18:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:19.903 18:27:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:19.903 { 00:06:19.903 "subsystems": [ 00:06:19.903 { 00:06:19.903 "subsystem": "bdev", 00:06:19.903 "config": [ 00:06:19.903 { 00:06:19.903 "params": { 00:06:19.903 "trtype": "pcie", 00:06:19.903 "traddr": "0000:00:10.0", 00:06:19.903 "name": "Nvme0" 00:06:19.903 }, 00:06:19.903 "method": "bdev_nvme_attach_controller" 00:06:19.903 }, 00:06:19.903 { 00:06:19.903 "method": "bdev_wait_for_examine" 00:06:19.903 } 00:06:19.903 ] 00:06:19.903 } 00:06:19.903 ] 00:06:19.903 } 00:06:19.903 [2024-05-16 18:27:33.377936] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
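
At this point the 4096-byte passes have run at both queue depths and the trace is repeating the identical cycle at 8192 bytes, with 16384 still to come. The sweep itself was set up at the top of dd_rw (basic_rw.sh@15 through @25 above): qds=(1 64) and three block sizes built by left-shifting the native size. A sketch of those driving loops, with the per-pass work reduced to a placeholder function:

#!/usr/bin/env bash
# Sketch of the dd_rw sweep: every block size (native_bs << 0..2) at qd 1 and 64.
native_bs=4096
qds=(1 64)
bss=()
for s in {0..2}; do
    bss+=($((native_bs << s)))             # 4096 8192 16384
done

run_basic_rw_pass() {                      # placeholder for write / read back / diff
    local bs=$1 qd=$2
    echo "basic_rw pass: bs=$bs qd=$qd"
}

for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        run_basic_rw_pass "$bs" "$qd"
    done
done

Note that the per-pass count recorded in the trace shrinks as the block size grows (15, 7, then 3), so every pass moves a payload in the same 48-60 KiB range: 15*4096=61440, 7*8192=57344, 3*16384=49152 bytes.
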
00:06:19.904 [2024-05-16 18:27:33.378075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62588 ] 00:06:20.163 [2024-05-16 18:27:33.527616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.163 [2024-05-16 18:27:33.641593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.422 [2024-05-16 18:27:33.699558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:20.681  Copying: 56/56 [kB] (average 54 MBps) 00:06:20.681 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:20.681 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:20.681 [2024-05-16 18:27:34.107954] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:20.681 [2024-05-16 18:27:34.108066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62609 ] 00:06:20.681 { 00:06:20.681 "subsystems": [ 00:06:20.681 { 00:06:20.681 "subsystem": "bdev", 00:06:20.681 "config": [ 00:06:20.681 { 00:06:20.681 "params": { 00:06:20.681 "trtype": "pcie", 00:06:20.681 "traddr": "0000:00:10.0", 00:06:20.681 "name": "Nvme0" 00:06:20.681 }, 00:06:20.681 "method": "bdev_nvme_attach_controller" 00:06:20.681 }, 00:06:20.681 { 00:06:20.681 "method": "bdev_wait_for_examine" 00:06:20.681 } 00:06:20.681 ] 00:06:20.681 } 00:06:20.681 ] 00:06:20.681 } 00:06:20.941 [2024-05-16 18:27:34.247973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.941 [2024-05-16 18:27:34.367819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.941 [2024-05-16 18:27:34.425150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:21.459  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:21.459 00:06:21.459 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:21.459 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:21.459 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:21.459 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:21.459 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:21.459 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:21.459 18:27:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.027 18:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:22.027 18:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:22.027 18:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.027 18:27:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.027 [2024-05-16 18:27:35.383197] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:22.027 [2024-05-16 18:27:35.383541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62628 ] 00:06:22.027 { 00:06:22.027 "subsystems": [ 00:06:22.027 { 00:06:22.027 "subsystem": "bdev", 00:06:22.027 "config": [ 00:06:22.027 { 00:06:22.027 "params": { 00:06:22.027 "trtype": "pcie", 00:06:22.027 "traddr": "0000:00:10.0", 00:06:22.027 "name": "Nvme0" 00:06:22.027 }, 00:06:22.027 "method": "bdev_nvme_attach_controller" 00:06:22.027 }, 00:06:22.027 { 00:06:22.027 "method": "bdev_wait_for_examine" 00:06:22.027 } 00:06:22.027 ] 00:06:22.027 } 00:06:22.027 ] 00:06:22.027 } 00:06:22.027 [2024-05-16 18:27:35.523575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.287 [2024-05-16 18:27:35.639353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.287 [2024-05-16 18:27:35.691968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:22.546  Copying: 56/56 [kB] (average 54 MBps) 00:06:22.546 00:06:22.546 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:22.546 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:22.546 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:22.546 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:22.805 [2024-05-16 18:27:36.066786] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:22.805 [2024-05-16 18:27:36.066925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62647 ] 00:06:22.805 { 00:06:22.805 "subsystems": [ 00:06:22.805 { 00:06:22.805 "subsystem": "bdev", 00:06:22.805 "config": [ 00:06:22.805 { 00:06:22.805 "params": { 00:06:22.805 "trtype": "pcie", 00:06:22.805 "traddr": "0000:00:10.0", 00:06:22.805 "name": "Nvme0" 00:06:22.805 }, 00:06:22.805 "method": "bdev_nvme_attach_controller" 00:06:22.805 }, 00:06:22.805 { 00:06:22.805 "method": "bdev_wait_for_examine" 00:06:22.805 } 00:06:22.805 ] 00:06:22.805 } 00:06:22.805 ] 00:06:22.805 } 00:06:22.805 [2024-05-16 18:27:36.204744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.063 [2024-05-16 18:27:36.316760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.063 [2024-05-16 18:27:36.369433] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:23.322  Copying: 56/56 [kB] (average 54 MBps) 00:06:23.322 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:23.322 18:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:23.322 [2024-05-16 18:27:36.735326] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:23.322 [2024-05-16 18:27:36.735438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62668 ] 00:06:23.322 { 00:06:23.322 "subsystems": [ 00:06:23.322 { 00:06:23.322 "subsystem": "bdev", 00:06:23.322 "config": [ 00:06:23.322 { 00:06:23.322 "params": { 00:06:23.322 "trtype": "pcie", 00:06:23.322 "traddr": "0000:00:10.0", 00:06:23.322 "name": "Nvme0" 00:06:23.322 }, 00:06:23.322 "method": "bdev_nvme_attach_controller" 00:06:23.322 }, 00:06:23.322 { 00:06:23.322 "method": "bdev_wait_for_examine" 00:06:23.322 } 00:06:23.322 ] 00:06:23.322 } 00:06:23.322 ] 00:06:23.322 } 00:06:23.579 [2024-05-16 18:27:36.865884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.579 [2024-05-16 18:27:36.994259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.579 [2024-05-16 18:27:37.047674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:24.094  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:24.094 00:06:24.094 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:24.094 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:24.094 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:24.094 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:24.094 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:24.094 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:24.094 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:24.094 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.662 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:24.662 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:24.662 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:24.662 18:27:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:24.662 { 00:06:24.662 "subsystems": [ 00:06:24.662 { 00:06:24.662 "subsystem": "bdev", 00:06:24.662 "config": [ 00:06:24.662 { 00:06:24.662 "params": { 00:06:24.662 "trtype": "pcie", 00:06:24.662 "traddr": "0000:00:10.0", 00:06:24.662 "name": "Nvme0" 00:06:24.662 }, 00:06:24.662 "method": "bdev_nvme_attach_controller" 00:06:24.662 }, 00:06:24.662 { 00:06:24.662 "method": "bdev_wait_for_examine" 00:06:24.662 } 00:06:24.662 ] 00:06:24.662 } 00:06:24.662 ] 00:06:24.662 } 00:06:24.663 [2024-05-16 18:27:38.008865] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
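
Between passes the suite runs clear_nvme (the common.sh@10 through @18 calls above): spdk_dd overwrites the head of the bdev with a single 1 MiB block of zeroes (--if=/dev/zero --bs=1048576 --count=1), presumably so that the next read-back cannot be satisfied by data left behind by an earlier pass. A minimal stand-in for that step, with the same placeholder config path as the earlier sketches:

#!/usr/bin/env bash
# Sketch of the clear_nvme step seen between passes: zero the start of the bdev.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

"$spdk_dd" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json ./nvme0_bdev.json
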
00:06:24.663 [2024-05-16 18:27:38.008971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62687 ] 00:06:24.663 [2024-05-16 18:27:38.146917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.956 [2024-05-16 18:27:38.304938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.956 [2024-05-16 18:27:38.380917] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:25.523  Copying: 48/48 [kB] (average 46 MBps) 00:06:25.524 00:06:25.524 18:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:25.524 18:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:25.524 18:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:25.524 18:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:25.524 { 00:06:25.524 "subsystems": [ 00:06:25.524 { 00:06:25.524 "subsystem": "bdev", 00:06:25.524 "config": [ 00:06:25.524 { 00:06:25.524 "params": { 00:06:25.524 "trtype": "pcie", 00:06:25.524 "traddr": "0000:00:10.0", 00:06:25.524 "name": "Nvme0" 00:06:25.524 }, 00:06:25.524 "method": "bdev_nvme_attach_controller" 00:06:25.524 }, 00:06:25.524 { 00:06:25.524 "method": "bdev_wait_for_examine" 00:06:25.524 } 00:06:25.524 ] 00:06:25.524 } 00:06:25.524 ] 00:06:25.524 } 00:06:25.524 [2024-05-16 18:27:38.849836] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:25.524 [2024-05-16 18:27:38.849948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62706 ] 00:06:25.524 [2024-05-16 18:27:38.984805] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.782 [2024-05-16 18:27:39.132601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.782 [2024-05-16 18:27:39.205015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.299  Copying: 48/48 [kB] (average 23 MBps) 00:06:26.299 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:26.299 18:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:26.299 [2024-05-16 18:27:39.621668] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:26.299 [2024-05-16 18:27:39.622014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62716 ] 00:06:26.299 { 00:06:26.299 "subsystems": [ 00:06:26.299 { 00:06:26.299 "subsystem": "bdev", 00:06:26.299 "config": [ 00:06:26.299 { 00:06:26.299 "params": { 00:06:26.299 "trtype": "pcie", 00:06:26.299 "traddr": "0000:00:10.0", 00:06:26.299 "name": "Nvme0" 00:06:26.299 }, 00:06:26.299 "method": "bdev_nvme_attach_controller" 00:06:26.299 }, 00:06:26.299 { 00:06:26.299 "method": "bdev_wait_for_examine" 00:06:26.299 } 00:06:26.299 ] 00:06:26.299 } 00:06:26.299 ] 00:06:26.299 } 00:06:26.299 [2024-05-16 18:27:39.765665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.557 [2024-05-16 18:27:39.893598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.557 [2024-05-16 18:27:39.951772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.815  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:26.815 00:06:26.815 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:26.815 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:26.815 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:26.815 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:26.815 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:26.815 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:26.815 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.381 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:27.382 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:27.382 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:27.382 18:27:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:27.640 [2024-05-16 18:27:40.898887] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:27.640 [2024-05-16 18:27:40.898974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62742 ] 00:06:27.640 { 00:06:27.640 "subsystems": [ 00:06:27.640 { 00:06:27.640 "subsystem": "bdev", 00:06:27.640 "config": [ 00:06:27.640 { 00:06:27.640 "params": { 00:06:27.640 "trtype": "pcie", 00:06:27.640 "traddr": "0000:00:10.0", 00:06:27.640 "name": "Nvme0" 00:06:27.640 }, 00:06:27.640 "method": "bdev_nvme_attach_controller" 00:06:27.640 }, 00:06:27.640 { 00:06:27.640 "method": "bdev_wait_for_examine" 00:06:27.640 } 00:06:27.640 ] 00:06:27.640 } 00:06:27.640 ] 00:06:27.640 } 00:06:27.640 [2024-05-16 18:27:41.036129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.899 [2024-05-16 18:27:41.166760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.899 [2024-05-16 18:27:41.224799] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.158  Copying: 48/48 [kB] (average 46 MBps) 00:06:28.158 00:06:28.158 18:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:28.158 18:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:28.158 18:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.158 18:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.158 { 00:06:28.158 "subsystems": [ 00:06:28.158 { 00:06:28.158 "subsystem": "bdev", 00:06:28.158 "config": [ 00:06:28.158 { 00:06:28.158 "params": { 00:06:28.158 "trtype": "pcie", 00:06:28.158 "traddr": "0000:00:10.0", 00:06:28.158 "name": "Nvme0" 00:06:28.158 }, 00:06:28.158 "method": "bdev_nvme_attach_controller" 00:06:28.158 }, 00:06:28.158 { 00:06:28.158 "method": "bdev_wait_for_examine" 00:06:28.158 } 00:06:28.158 ] 00:06:28.158 } 00:06:28.158 ] 00:06:28.158 } 00:06:28.158 [2024-05-16 18:27:41.656744] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:28.158 [2024-05-16 18:27:41.656966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62754 ] 00:06:28.416 [2024-05-16 18:27:41.804597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.674 [2024-05-16 18:27:41.921659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.674 [2024-05-16 18:27:41.979415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.933  Copying: 48/48 [kB] (average 46 MBps) 00:06:28.933 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:28.933 18:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:28.933 [2024-05-16 18:27:42.371626] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:28.933 [2024-05-16 18:27:42.371730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62775 ] 00:06:28.933 { 00:06:28.933 "subsystems": [ 00:06:28.933 { 00:06:28.933 "subsystem": "bdev", 00:06:28.933 "config": [ 00:06:28.933 { 00:06:28.933 "params": { 00:06:28.933 "trtype": "pcie", 00:06:28.933 "traddr": "0000:00:10.0", 00:06:28.933 "name": "Nvme0" 00:06:28.933 }, 00:06:28.933 "method": "bdev_nvme_attach_controller" 00:06:28.933 }, 00:06:28.933 { 00:06:28.933 "method": "bdev_wait_for_examine" 00:06:28.933 } 00:06:28.933 ] 00:06:28.933 } 00:06:28.933 ] 00:06:28.933 } 00:06:29.192 [2024-05-16 18:27:42.514196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.192 [2024-05-16 18:27:42.634110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.192 [2024-05-16 18:27:42.690089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:29.708  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:29.709 00:06:29.709 00:06:29.709 real 0m16.555s 00:06:29.709 user 0m12.357s 00:06:29.709 sys 0m5.798s 00:06:29.709 ************************************ 00:06:29.709 END TEST dd_rw 00:06:29.709 ************************************ 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:29.709 ************************************ 00:06:29.709 START TEST dd_rw_offset 00:06:29.709 ************************************ 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1121 -- # basic_offset 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=4u4ud6yzhys47n9acp7xkv1gpr6bx9c88rsomgz0rxoozziwufbnf4fu6sfhkgen0zxn42ufv06gfsl3s3pjdfnijo6h4bsveoghf9ruhs8zb72o62mcs5v2ocxy0s2jjlf0dgg1s9wzo78ockrmtxfa5zixdpsf84s8eshmodlw99rksr6ohkhi8pt2v6e8g9hyhnpg4ejy57rp3m8xhxhbrnnk3ucoed7ulu63ol7uneixl26rq63ggwxjip8yh31okn6leequjxuhdeq2dywr47wjaaq6kjv39scga8lwaek0wgfa7xolrnlj4r7c1jp1cklab7v2qsyn81i24ilvudtf0n26usrdpcu8l3tw7nq4x6xvyavg3z34jk87hme1fr9tmgqsv9h3hmi115z18vq9ksg0a0hu2n2mfzmc3a23fte4w8dzoe96e2bpeuo44ic28c4z9k2p5oyufdv9z2dp7fuqlm61fuhbpjpfk87x3c31e0dp55jjblerfv43frn1iog0b6vu1qhqyfz1mjeidid4uje5uut2i279amdatdp2l8f5o2od00wzd5kph3567x56eh7kay2gfvwf11b8ajqhlny0nlcpgnykttpixlaoy3qiomjv1hhm1fmytw5jprqkqhwc69vh6iuefxhjnbuldmfduyickc3sdxgpaa57s3cai6ikvpzsopdkjkw4bl67mks3spmxxrhgei9xipbvj14ksbkl09pfw258t95cl9bxz6grsam75i4caz24snoqzspxvdgkcet72c9c4rgtc6w98c5d8cm41r8obl8kbie08if12aikq4jni91abmpqxklcfmfamz4vuq78ayek2oh40k4fp0lvv2mjgco4gekk1l9iwe1qydt3lslfzuv62k99p7brpv7lxcfw9bqc37mpmax148qjxkr4wrvagga41sr7tr7slm9zlr901735h2y9ql59h2oaokobarppmvxctl1b7ocz1eb43z7ox8xafc8ad6ddqaydfyrj85ullspo4rbat71tqyxn3qo4vbhyzjhzrbfp95phgzhihd82n61xvy28b8udk1viilpbn5algsfhjapkwicrk9p9p0qnyxmoly93u4gjr4hxy20abmtz7gotnovbgwirpyjxgub94i0pbdfgin8cb9kb6xy418kjq8e9zty42y3ssc5kzgfytmatr70yrfzt35yu4yxh0pzbxffu4rf6k80f6am64ih8i5ngkvm4dm0j9komxelk9c83ud9ssfhzlqg9r3sm8indkipusj9yv65dsldci58tdg54slg8yvnkvuysrr1mft9lvt6roc12e7akm6ruj6e1jo0mogi0e2jz6eo4ht9yo3xhpn595avjhf0x2lwuwdq19i7muj358wsi5kuw3d2em85hudatzih0hlwd3poxc592fdb13te9lyjixq3d08951kyqaqnoewd7w1o0ls9xrw2guownf43oddvgi2whzmrriktgk8g1c78yxkh7tjnz4lwpjj3dmx7nqh5inlem1evrblgkubcvb6i9nn1mnmufv4flmdjk25wppi94kzqcsfy87jcock5kbw7ftcw8o4r9vflo72kq2700ef6fz684tw03tnrvj7k820sysiqoguaai8x3t15w64oaga0z264vegsv2pll1sjbz2v0cc7bzb2j6hszkzagh9q5kt1y0hkuig1iatim5clxnss5f3lvyipn122h42xmea9xaoygjcpnawgd7qpur7n2nzbirxrmdfa4ohertyod7bxirj35mttqhga3wh9uy08w2134a1np74gsn3pqyfmz6zjt8n4psle0nc626pcnzxg3ab1awnmvpk8y99r4r7gynq48u67r6tmbwd9vntkif76zb0w7mw193qhm78v70ny6udeo5a1xk7zs9y48kuanv9wqgr9k7kmiw5cjdb3yjj0ngan31v1y38et4n5urjcedonk623efnbu085hamz1tao8vxyblcp95rn9xc1su1zwa4rq68zundwopq1g0l13ddvl5neie1233w3u34jwb404w6tiz1hr8rv6o790nt2wf85w5fezecfeo9ikynlw1gyzqa87ngm3rhwtsrx3l4jt2abdqt8drnd3v481hd0xq42wgzx847rzqt44hsu470agfoqewf4180wk2ftmsf7rk37id6evozdbpydoy2mbkmkijskpdy4uxa25rqv2r4bbioqk778bfzns334aai4lk0asj7kgxpvonk6oemc008wnsju2kli7ojvxqopks194uugusix3ckoedhmvjoesiwa7ew7t5dapmb09pq54he8nsdp66g6ie4jd0yolm2dqqbbxfyic45jwlplpgnsffpilnccgr4zkmadq3as6eglj9bsn2p8feiq1lxol5bqnhnmy6cci04t6ciolmpu78bd8eroik8pugy4fbfm24b7ua5vkv65inhm1e7m1nh0g9fy4f1gt0dlp4eb00l5gugoqost8zg6xpge0d406wtr956d5izn14wk9is3x22mm50hh478awazkwh9pq6damkf6wq8rfzki02mgdqmxb2uow9ajlzy38vfi0mmfd3rn3b77qifpqozsugpb4ihtkcsoexi729lggaiau4awfdv54z9m97o4crntdxkiiwblyz6wvycogjqz98klbl4an0aybzph9wkmvutfnycmpfw0m0m87ygbm0eoebyz0oxqardguutnj9rlad9d1sdo9gydccox8bt3m8wxc48qh6x11d14lf5jkc709wpwrk20yprzg30x3n6cv1cz8y0mzzux4215ev1573tdz4hxpihsirfmpklyv0e3uym4127klu80i8ixzl0hhn76iyzcjy0utwufdl7wswbm57ai16w1x3x8r2ch7gw77045ystnnmlp2kdkuzu9t5ixwov1luuisrvgba2a7684htrv0xj939bl4z3rtjoy7iqe2u5vf0y4xs8cu683bzc1qk19ycuqytr0x9aqeqawol6r90788way0ediufqsq9driv6nc3clzq33w5o85rbjbjlpn4p5k65s103whh475ok00wque6y9i1t3grrjq8vw9o67mkfri05twvf3yu0f1kx7ggsw0pfrrm1dnvk26xfuywppbt2lopehd23dpq75hbkg36du7920d9bcmezea1wb882k2xl4yhj23rr9xl1umdmg1utzl9p8tki9px98kdf5nktrb86em43mcn5ln9tosr0xknxulso5qp6fos46s188vdncrvdmb65y8x9ozo81kl6bpjz1n9xb5vy68ydzvdmdt2kih3k7r8e9x8g0z1sjbfck8jwfeo2wehub3jpmu1j644gnei44obkpv6c5t8zgcrayem7r4qfgtjoywhme12cm0qt31ezchj0juw09fj2z5j4xih0zyqy6ci52otila67j3wmdintf3xxgk2x9zocwxshztifut9bq0kw6gs0wmn
b7ee38d69hw8p0fut92y1b8cfjq0muluv7yqg532y9r6ukydm0p0rzcc7j21r4ikx4aaks2lbyghanw41axk52llu04763rrfj5o5883dihg2dbb963kvhq1tpfoqxrnb5sxliloihqr5d54w34dm1l6pmahtmk821pwx4se41xon791x5q0hfguukb3a0z6l4xywtwa0wqns537uep9n4v8uhsuhqvi8tln5zknwzcqybwa18wduwc7syqfjwl3ueggbvjapjcef0k3auc580rvh8u91m33zy1atl1dzutg32s011joelc7bzp457bjejsh646y9848jp7eottldvfg0r9ph4b419ulns7iaflu6cf3j1i44s7plxzyc1xsuok8sr44jz4r309mdwmo2lzgsk0t1m0nan0dxh2q62uqvltn2mi83at4mtkq71kyt6my6px8pj3bkop6dob4x58fb85rcfp9glighvio0ykfsu4515evxspt0rsvayqdpkuzwzlxunilaoym3i7j3kpibgpbpoll7w 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:29.709 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:29.709 [2024-05-16 18:27:43.182525] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:29.709 [2024-05-16 18:27:43.182631] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62811 ] 00:06:29.709 { 00:06:29.709 "subsystems": [ 00:06:29.709 { 00:06:29.709 "subsystem": "bdev", 00:06:29.709 "config": [ 00:06:29.709 { 00:06:29.709 "params": { 00:06:29.709 "trtype": "pcie", 00:06:29.709 "traddr": "0000:00:10.0", 00:06:29.709 "name": "Nvme0" 00:06:29.709 }, 00:06:29.709 "method": "bdev_nvme_attach_controller" 00:06:29.709 }, 00:06:29.709 { 00:06:29.709 "method": "bdev_wait_for_examine" 00:06:29.709 } 00:06:29.709 ] 00:06:29.709 } 00:06:29.709 ] 00:06:29.709 } 00:06:29.976 [2024-05-16 18:27:43.319758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.976 [2024-05-16 18:27:43.431755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.237 [2024-05-16 18:27:43.489533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.497  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:30.497 00:06:30.497 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:30.497 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:30.497 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:30.497 18:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:30.497 [2024-05-16 18:27:43.863580] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:30.497 [2024-05-16 18:27:43.863689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62825 ] 00:06:30.497 { 00:06:30.497 "subsystems": [ 00:06:30.497 { 00:06:30.497 "subsystem": "bdev", 00:06:30.497 "config": [ 00:06:30.498 { 00:06:30.498 "params": { 00:06:30.498 "trtype": "pcie", 00:06:30.498 "traddr": "0000:00:10.0", 00:06:30.498 "name": "Nvme0" 00:06:30.498 }, 00:06:30.498 "method": "bdev_nvme_attach_controller" 00:06:30.498 }, 00:06:30.498 { 00:06:30.498 "method": "bdev_wait_for_examine" 00:06:30.498 } 00:06:30.498 ] 00:06:30.498 } 00:06:30.498 ] 00:06:30.498 } 00:06:30.498 [2024-05-16 18:27:43.994251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.756 [2024-05-16 18:27:44.114358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.756 [2024-05-16 18:27:44.169367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.016  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:31.016 00:06:31.016 18:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:31.017 18:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 4u4ud6yzhys47n9acp7xkv1gpr6bx9c88rsomgz0rxoozziwufbnf4fu6sfhkgen0zxn42ufv06gfsl3s3pjdfnijo6h4bsveoghf9ruhs8zb72o62mcs5v2ocxy0s2jjlf0dgg1s9wzo78ockrmtxfa5zixdpsf84s8eshmodlw99rksr6ohkhi8pt2v6e8g9hyhnpg4ejy57rp3m8xhxhbrnnk3ucoed7ulu63ol7uneixl26rq63ggwxjip8yh31okn6leequjxuhdeq2dywr47wjaaq6kjv39scga8lwaek0wgfa7xolrnlj4r7c1jp1cklab7v2qsyn81i24ilvudtf0n26usrdpcu8l3tw7nq4x6xvyavg3z34jk87hme1fr9tmgqsv9h3hmi115z18vq9ksg0a0hu2n2mfzmc3a23fte4w8dzoe96e2bpeuo44ic28c4z9k2p5oyufdv9z2dp7fuqlm61fuhbpjpfk87x3c31e0dp55jjblerfv43frn1iog0b6vu1qhqyfz1mjeidid4uje5uut2i279amdatdp2l8f5o2od00wzd5kph3567x56eh7kay2gfvwf11b8ajqhlny0nlcpgnykttpixlaoy3qiomjv1hhm1fmytw5jprqkqhwc69vh6iuefxhjnbuldmfduyickc3sdxgpaa57s3cai6ikvpzsopdkjkw4bl67mks3spmxxrhgei9xipbvj14ksbkl09pfw258t95cl9bxz6grsam75i4caz24snoqzspxvdgkcet72c9c4rgtc6w98c5d8cm41r8obl8kbie08if12aikq4jni91abmpqxklcfmfamz4vuq78ayek2oh40k4fp0lvv2mjgco4gekk1l9iwe1qydt3lslfzuv62k99p7brpv7lxcfw9bqc37mpmax148qjxkr4wrvagga41sr7tr7slm9zlr901735h2y9ql59h2oaokobarppmvxctl1b7ocz1eb43z7ox8xafc8ad6ddqaydfyrj85ullspo4rbat71tqyxn3qo4vbhyzjhzrbfp95phgzhihd82n61xvy28b8udk1viilpbn5algsfhjapkwicrk9p9p0qnyxmoly93u4gjr4hxy20abmtz7gotnovbgwirpyjxgub94i0pbdfgin8cb9kb6xy418kjq8e9zty42y3ssc5kzgfytmatr70yrfzt35yu4yxh0pzbxffu4rf6k80f6am64ih8i5ngkvm4dm0j9komxelk9c83ud9ssfhzlqg9r3sm8indkipusj9yv65dsldci58tdg54slg8yvnkvuysrr1mft9lvt6roc12e7akm6ruj6e1jo0mogi0e2jz6eo4ht9yo3xhpn595avjhf0x2lwuwdq19i7muj358wsi5kuw3d2em85hudatzih0hlwd3poxc592fdb13te9lyjixq3d08951kyqaqnoewd7w1o0ls9xrw2guownf43oddvgi2whzmrriktgk8g1c78yxkh7tjnz4lwpjj3dmx7nqh5inlem1evrblgkubcvb6i9nn1mnmufv4flmdjk25wppi94kzqcsfy87jcock5kbw7ftcw8o4r9vflo72kq2700ef6fz684tw03tnrvj7k820sysiqoguaai8x3t15w64oaga0z264vegsv2pll1sjbz2v0cc7bzb2j6hszkzagh9q5kt1y0hkuig1iatim5clxnss5f3lvyipn122h42xmea9xaoygjcpnawgd7qpur7n2nzbirxrmdfa4ohertyod7bxirj35mttqhga3wh9uy08w2134a1np74gsn3pqyfmz6zjt8n4psle0nc626pcnzxg3ab1awnmvpk8y99r4r7gynq48u67r6tmbwd9vntkif76zb0w7mw193qhm78v70ny6udeo5a1xk7zs9y48kuanv9wqgr9k7kmiw5cjdb3yjj0ngan31v1y38et4n5urjcedonk623efnbu085hamz1tao8vxyblcp95rn9xc1su1zwa4rq68zundwopq1g0l13ddvl5neie1233w3u34jwb404w6tiz1hr8rv6o790nt2wf85w5fezecfeo9ikynlw1gyzqa87ngm3rhwtsrx3l4jt2abdqt8drnd3v481hd0xq42wg
zx847rzqt44hsu470agfoqewf4180wk2ftmsf7rk37id6evozdbpydoy2mbkmkijskpdy4uxa25rqv2r4bbioqk778bfzns334aai4lk0asj7kgxpvonk6oemc008wnsju2kli7ojvxqopks194uugusix3ckoedhmvjoesiwa7ew7t5dapmb09pq54he8nsdp66g6ie4jd0yolm2dqqbbxfyic45jwlplpgnsffpilnccgr4zkmadq3as6eglj9bsn2p8feiq1lxol5bqnhnmy6cci04t6ciolmpu78bd8eroik8pugy4fbfm24b7ua5vkv65inhm1e7m1nh0g9fy4f1gt0dlp4eb00l5gugoqost8zg6xpge0d406wtr956d5izn14wk9is3x22mm50hh478awazkwh9pq6damkf6wq8rfzki02mgdqmxb2uow9ajlzy38vfi0mmfd3rn3b77qifpqozsugpb4ihtkcsoexi729lggaiau4awfdv54z9m97o4crntdxkiiwblyz6wvycogjqz98klbl4an0aybzph9wkmvutfnycmpfw0m0m87ygbm0eoebyz0oxqardguutnj9rlad9d1sdo9gydccox8bt3m8wxc48qh6x11d14lf5jkc709wpwrk20yprzg30x3n6cv1cz8y0mzzux4215ev1573tdz4hxpihsirfmpklyv0e3uym4127klu80i8ixzl0hhn76iyzcjy0utwufdl7wswbm57ai16w1x3x8r2ch7gw77045ystnnmlp2kdkuzu9t5ixwov1luuisrvgba2a7684htrv0xj939bl4z3rtjoy7iqe2u5vf0y4xs8cu683bzc1qk19ycuqytr0x9aqeqawol6r90788way0ediufqsq9driv6nc3clzq33w5o85rbjbjlpn4p5k65s103whh475ok00wque6y9i1t3grrjq8vw9o67mkfri05twvf3yu0f1kx7ggsw0pfrrm1dnvk26xfuywppbt2lopehd23dpq75hbkg36du7920d9bcmezea1wb882k2xl4yhj23rr9xl1umdmg1utzl9p8tki9px98kdf5nktrb86em43mcn5ln9tosr0xknxulso5qp6fos46s188vdncrvdmb65y8x9ozo81kl6bpjz1n9xb5vy68ydzvdmdt2kih3k7r8e9x8g0z1sjbfck8jwfeo2wehub3jpmu1j644gnei44obkpv6c5t8zgcrayem7r4qfgtjoywhme12cm0qt31ezchj0juw09fj2z5j4xih0zyqy6ci52otila67j3wmdintf3xxgk2x9zocwxshztifut9bq0kw6gs0wmnb7ee38d69hw8p0fut92y1b8cfjq0muluv7yqg532y9r6ukydm0p0rzcc7j21r4ikx4aaks2lbyghanw41axk52llu04763rrfj5o5883dihg2dbb963kvhq1tpfoqxrnb5sxliloihqr5d54w34dm1l6pmahtmk821pwx4se41xon791x5q0hfguukb3a0z6l4xywtwa0wqns537uep9n4v8uhsuhqvi8tln5zknwzcqybwa18wduwc7syqfjwl3ueggbvjapjcef0k3auc580rvh8u91m33zy1atl1dzutg32s011joelc7bzp457bjejsh646y9848jp7eottldvfg0r9ph4b419ulns7iaflu6cf3j1i44s7plxzyc1xsuok8sr44jz4r309mdwmo2lzgsk0t1m0nan0dxh2q62uqvltn2mi83at4mtkq71kyt6my6px8pj3bkop6dob4x58fb85rcfp9glighvio0ykfsu4515evxspt0rsvayqdpkuzwzlxunilaoym3i7j3kpibgpbpoll7w == 
\4\u\4\u\d\6\y\z\h\y\s\4\7\n\9\a\c\p\7\x\k\v\1\g\p\r\6\b\x\9\c\8\8\r\s\o\m\g\z\0\r\x\o\o\z\z\i\w\u\f\b\n\f\4\f\u\6\s\f\h\k\g\e\n\0\z\x\n\4\2\u\f\v\0\6\g\f\s\l\3\s\3\p\j\d\f\n\i\j\o\6\h\4\b\s\v\e\o\g\h\f\9\r\u\h\s\8\z\b\7\2\o\6\2\m\c\s\5\v\2\o\c\x\y\0\s\2\j\j\l\f\0\d\g\g\1\s\9\w\z\o\7\8\o\c\k\r\m\t\x\f\a\5\z\i\x\d\p\s\f\8\4\s\8\e\s\h\m\o\d\l\w\9\9\r\k\s\r\6\o\h\k\h\i\8\p\t\2\v\6\e\8\g\9\h\y\h\n\p\g\4\e\j\y\5\7\r\p\3\m\8\x\h\x\h\b\r\n\n\k\3\u\c\o\e\d\7\u\l\u\6\3\o\l\7\u\n\e\i\x\l\2\6\r\q\6\3\g\g\w\x\j\i\p\8\y\h\3\1\o\k\n\6\l\e\e\q\u\j\x\u\h\d\e\q\2\d\y\w\r\4\7\w\j\a\a\q\6\k\j\v\3\9\s\c\g\a\8\l\w\a\e\k\0\w\g\f\a\7\x\o\l\r\n\l\j\4\r\7\c\1\j\p\1\c\k\l\a\b\7\v\2\q\s\y\n\8\1\i\2\4\i\l\v\u\d\t\f\0\n\2\6\u\s\r\d\p\c\u\8\l\3\t\w\7\n\q\4\x\6\x\v\y\a\v\g\3\z\3\4\j\k\8\7\h\m\e\1\f\r\9\t\m\g\q\s\v\9\h\3\h\m\i\1\1\5\z\1\8\v\q\9\k\s\g\0\a\0\h\u\2\n\2\m\f\z\m\c\3\a\2\3\f\t\e\4\w\8\d\z\o\e\9\6\e\2\b\p\e\u\o\4\4\i\c\2\8\c\4\z\9\k\2\p\5\o\y\u\f\d\v\9\z\2\d\p\7\f\u\q\l\m\6\1\f\u\h\b\p\j\p\f\k\8\7\x\3\c\3\1\e\0\d\p\5\5\j\j\b\l\e\r\f\v\4\3\f\r\n\1\i\o\g\0\b\6\v\u\1\q\h\q\y\f\z\1\m\j\e\i\d\i\d\4\u\j\e\5\u\u\t\2\i\2\7\9\a\m\d\a\t\d\p\2\l\8\f\5\o\2\o\d\0\0\w\z\d\5\k\p\h\3\5\6\7\x\5\6\e\h\7\k\a\y\2\g\f\v\w\f\1\1\b\8\a\j\q\h\l\n\y\0\n\l\c\p\g\n\y\k\t\t\p\i\x\l\a\o\y\3\q\i\o\m\j\v\1\h\h\m\1\f\m\y\t\w\5\j\p\r\q\k\q\h\w\c\6\9\v\h\6\i\u\e\f\x\h\j\n\b\u\l\d\m\f\d\u\y\i\c\k\c\3\s\d\x\g\p\a\a\5\7\s\3\c\a\i\6\i\k\v\p\z\s\o\p\d\k\j\k\w\4\b\l\6\7\m\k\s\3\s\p\m\x\x\r\h\g\e\i\9\x\i\p\b\v\j\1\4\k\s\b\k\l\0\9\p\f\w\2\5\8\t\9\5\c\l\9\b\x\z\6\g\r\s\a\m\7\5\i\4\c\a\z\2\4\s\n\o\q\z\s\p\x\v\d\g\k\c\e\t\7\2\c\9\c\4\r\g\t\c\6\w\9\8\c\5\d\8\c\m\4\1\r\8\o\b\l\8\k\b\i\e\0\8\i\f\1\2\a\i\k\q\4\j\n\i\9\1\a\b\m\p\q\x\k\l\c\f\m\f\a\m\z\4\v\u\q\7\8\a\y\e\k\2\o\h\4\0\k\4\f\p\0\l\v\v\2\m\j\g\c\o\4\g\e\k\k\1\l\9\i\w\e\1\q\y\d\t\3\l\s\l\f\z\u\v\6\2\k\9\9\p\7\b\r\p\v\7\l\x\c\f\w\9\b\q\c\3\7\m\p\m\a\x\1\4\8\q\j\x\k\r\4\w\r\v\a\g\g\a\4\1\s\r\7\t\r\7\s\l\m\9\z\l\r\9\0\1\7\3\5\h\2\y\9\q\l\5\9\h\2\o\a\o\k\o\b\a\r\p\p\m\v\x\c\t\l\1\b\7\o\c\z\1\e\b\4\3\z\7\o\x\8\x\a\f\c\8\a\d\6\d\d\q\a\y\d\f\y\r\j\8\5\u\l\l\s\p\o\4\r\b\a\t\7\1\t\q\y\x\n\3\q\o\4\v\b\h\y\z\j\h\z\r\b\f\p\9\5\p\h\g\z\h\i\h\d\8\2\n\6\1\x\v\y\2\8\b\8\u\d\k\1\v\i\i\l\p\b\n\5\a\l\g\s\f\h\j\a\p\k\w\i\c\r\k\9\p\9\p\0\q\n\y\x\m\o\l\y\9\3\u\4\g\j\r\4\h\x\y\2\0\a\b\m\t\z\7\g\o\t\n\o\v\b\g\w\i\r\p\y\j\x\g\u\b\9\4\i\0\p\b\d\f\g\i\n\8\c\b\9\k\b\6\x\y\4\1\8\k\j\q\8\e\9\z\t\y\4\2\y\3\s\s\c\5\k\z\g\f\y\t\m\a\t\r\7\0\y\r\f\z\t\3\5\y\u\4\y\x\h\0\p\z\b\x\f\f\u\4\r\f\6\k\8\0\f\6\a\m\6\4\i\h\8\i\5\n\g\k\v\m\4\d\m\0\j\9\k\o\m\x\e\l\k\9\c\8\3\u\d\9\s\s\f\h\z\l\q\g\9\r\3\s\m\8\i\n\d\k\i\p\u\s\j\9\y\v\6\5\d\s\l\d\c\i\5\8\t\d\g\5\4\s\l\g\8\y\v\n\k\v\u\y\s\r\r\1\m\f\t\9\l\v\t\6\r\o\c\1\2\e\7\a\k\m\6\r\u\j\6\e\1\j\o\0\m\o\g\i\0\e\2\j\z\6\e\o\4\h\t\9\y\o\3\x\h\p\n\5\9\5\a\v\j\h\f\0\x\2\l\w\u\w\d\q\1\9\i\7\m\u\j\3\5\8\w\s\i\5\k\u\w\3\d\2\e\m\8\5\h\u\d\a\t\z\i\h\0\h\l\w\d\3\p\o\x\c\5\9\2\f\d\b\1\3\t\e\9\l\y\j\i\x\q\3\d\0\8\9\5\1\k\y\q\a\q\n\o\e\w\d\7\w\1\o\0\l\s\9\x\r\w\2\g\u\o\w\n\f\4\3\o\d\d\v\g\i\2\w\h\z\m\r\r\i\k\t\g\k\8\g\1\c\7\8\y\x\k\h\7\t\j\n\z\4\l\w\p\j\j\3\d\m\x\7\n\q\h\5\i\n\l\e\m\1\e\v\r\b\l\g\k\u\b\c\v\b\6\i\9\n\n\1\m\n\m\u\f\v\4\f\l\m\d\j\k\2\5\w\p\p\i\9\4\k\z\q\c\s\f\y\8\7\j\c\o\c\k\5\k\b\w\7\f\t\c\w\8\o\4\r\9\v\f\l\o\7\2\k\q\2\7\0\0\e\f\6\f\z\6\8\4\t\w\0\3\t\n\r\v\j\7\k\8\2\0\s\y\s\i\q\o\g\u\a\a\i\8\x\3\t\1\5\w\6\4\o\a\g\a\0\z\2\6\4\v\e\g\s\v\2\p\l\l\1\s\j\b\z\2\v\0\c\c\7\b\z\b\2\j\6\h\s\z\k\z\a\g\h\9\q\5\k\t\1\y\0\h\k\u\i\g\1\i\a\t\i\m\5\c\l\x\n\s\s\5\f\3\l\v\y\i\p\n\1\2\2\h\4\2\x\m\e\a\9\x\a\o\y\g\j\c\p\n\a\
w\g\d\7\q\p\u\r\7\n\2\n\z\b\i\r\x\r\m\d\f\a\4\o\h\e\r\t\y\o\d\7\b\x\i\r\j\3\5\m\t\t\q\h\g\a\3\w\h\9\u\y\0\8\w\2\1\3\4\a\1\n\p\7\4\g\s\n\3\p\q\y\f\m\z\6\z\j\t\8\n\4\p\s\l\e\0\n\c\6\2\6\p\c\n\z\x\g\3\a\b\1\a\w\n\m\v\p\k\8\y\9\9\r\4\r\7\g\y\n\q\4\8\u\6\7\r\6\t\m\b\w\d\9\v\n\t\k\i\f\7\6\z\b\0\w\7\m\w\1\9\3\q\h\m\7\8\v\7\0\n\y\6\u\d\e\o\5\a\1\x\k\7\z\s\9\y\4\8\k\u\a\n\v\9\w\q\g\r\9\k\7\k\m\i\w\5\c\j\d\b\3\y\j\j\0\n\g\a\n\3\1\v\1\y\3\8\e\t\4\n\5\u\r\j\c\e\d\o\n\k\6\2\3\e\f\n\b\u\0\8\5\h\a\m\z\1\t\a\o\8\v\x\y\b\l\c\p\9\5\r\n\9\x\c\1\s\u\1\z\w\a\4\r\q\6\8\z\u\n\d\w\o\p\q\1\g\0\l\1\3\d\d\v\l\5\n\e\i\e\1\2\3\3\w\3\u\3\4\j\w\b\4\0\4\w\6\t\i\z\1\h\r\8\r\v\6\o\7\9\0\n\t\2\w\f\8\5\w\5\f\e\z\e\c\f\e\o\9\i\k\y\n\l\w\1\g\y\z\q\a\8\7\n\g\m\3\r\h\w\t\s\r\x\3\l\4\j\t\2\a\b\d\q\t\8\d\r\n\d\3\v\4\8\1\h\d\0\x\q\4\2\w\g\z\x\8\4\7\r\z\q\t\4\4\h\s\u\4\7\0\a\g\f\o\q\e\w\f\4\1\8\0\w\k\2\f\t\m\s\f\7\r\k\3\7\i\d\6\e\v\o\z\d\b\p\y\d\o\y\2\m\b\k\m\k\i\j\s\k\p\d\y\4\u\x\a\2\5\r\q\v\2\r\4\b\b\i\o\q\k\7\7\8\b\f\z\n\s\3\3\4\a\a\i\4\l\k\0\a\s\j\7\k\g\x\p\v\o\n\k\6\o\e\m\c\0\0\8\w\n\s\j\u\2\k\l\i\7\o\j\v\x\q\o\p\k\s\1\9\4\u\u\g\u\s\i\x\3\c\k\o\e\d\h\m\v\j\o\e\s\i\w\a\7\e\w\7\t\5\d\a\p\m\b\0\9\p\q\5\4\h\e\8\n\s\d\p\6\6\g\6\i\e\4\j\d\0\y\o\l\m\2\d\q\q\b\b\x\f\y\i\c\4\5\j\w\l\p\l\p\g\n\s\f\f\p\i\l\n\c\c\g\r\4\z\k\m\a\d\q\3\a\s\6\e\g\l\j\9\b\s\n\2\p\8\f\e\i\q\1\l\x\o\l\5\b\q\n\h\n\m\y\6\c\c\i\0\4\t\6\c\i\o\l\m\p\u\7\8\b\d\8\e\r\o\i\k\8\p\u\g\y\4\f\b\f\m\2\4\b\7\u\a\5\v\k\v\6\5\i\n\h\m\1\e\7\m\1\n\h\0\g\9\f\y\4\f\1\g\t\0\d\l\p\4\e\b\0\0\l\5\g\u\g\o\q\o\s\t\8\z\g\6\x\p\g\e\0\d\4\0\6\w\t\r\9\5\6\d\5\i\z\n\1\4\w\k\9\i\s\3\x\2\2\m\m\5\0\h\h\4\7\8\a\w\a\z\k\w\h\9\p\q\6\d\a\m\k\f\6\w\q\8\r\f\z\k\i\0\2\m\g\d\q\m\x\b\2\u\o\w\9\a\j\l\z\y\3\8\v\f\i\0\m\m\f\d\3\r\n\3\b\7\7\q\i\f\p\q\o\z\s\u\g\p\b\4\i\h\t\k\c\s\o\e\x\i\7\2\9\l\g\g\a\i\a\u\4\a\w\f\d\v\5\4\z\9\m\9\7\o\4\c\r\n\t\d\x\k\i\i\w\b\l\y\z\6\w\v\y\c\o\g\j\q\z\9\8\k\l\b\l\4\a\n\0\a\y\b\z\p\h\9\w\k\m\v\u\t\f\n\y\c\m\p\f\w\0\m\0\m\8\7\y\g\b\m\0\e\o\e\b\y\z\0\o\x\q\a\r\d\g\u\u\t\n\j\9\r\l\a\d\9\d\1\s\d\o\9\g\y\d\c\c\o\x\8\b\t\3\m\8\w\x\c\4\8\q\h\6\x\1\1\d\1\4\l\f\5\j\k\c\7\0\9\w\p\w\r\k\2\0\y\p\r\z\g\3\0\x\3\n\6\c\v\1\c\z\8\y\0\m\z\z\u\x\4\2\1\5\e\v\1\5\7\3\t\d\z\4\h\x\p\i\h\s\i\r\f\m\p\k\l\y\v\0\e\3\u\y\m\4\1\2\7\k\l\u\8\0\i\8\i\x\z\l\0\h\h\n\7\6\i\y\z\c\j\y\0\u\t\w\u\f\d\l\7\w\s\w\b\m\5\7\a\i\1\6\w\1\x\3\x\8\r\2\c\h\7\g\w\7\7\0\4\5\y\s\t\n\n\m\l\p\2\k\d\k\u\z\u\9\t\5\i\x\w\o\v\1\l\u\u\i\s\r\v\g\b\a\2\a\7\6\8\4\h\t\r\v\0\x\j\9\3\9\b\l\4\z\3\r\t\j\o\y\7\i\q\e\2\u\5\v\f\0\y\4\x\s\8\c\u\6\8\3\b\z\c\1\q\k\1\9\y\c\u\q\y\t\r\0\x\9\a\q\e\q\a\w\o\l\6\r\9\0\7\8\8\w\a\y\0\e\d\i\u\f\q\s\q\9\d\r\i\v\6\n\c\3\c\l\z\q\3\3\w\5\o\8\5\r\b\j\b\j\l\p\n\4\p\5\k\6\5\s\1\0\3\w\h\h\4\7\5\o\k\0\0\w\q\u\e\6\y\9\i\1\t\3\g\r\r\j\q\8\v\w\9\o\6\7\m\k\f\r\i\0\5\t\w\v\f\3\y\u\0\f\1\k\x\7\g\g\s\w\0\p\f\r\r\m\1\d\n\v\k\2\6\x\f\u\y\w\p\p\b\t\2\l\o\p\e\h\d\2\3\d\p\q\7\5\h\b\k\g\3\6\d\u\7\9\2\0\d\9\b\c\m\e\z\e\a\1\w\b\8\8\2\k\2\x\l\4\y\h\j\2\3\r\r\9\x\l\1\u\m\d\m\g\1\u\t\z\l\9\p\8\t\k\i\9\p\x\9\8\k\d\f\5\n\k\t\r\b\8\6\e\m\4\3\m\c\n\5\l\n\9\t\o\s\r\0\x\k\n\x\u\l\s\o\5\q\p\6\f\o\s\4\6\s\1\8\8\v\d\n\c\r\v\d\m\b\6\5\y\8\x\9\o\z\o\8\1\k\l\6\b\p\j\z\1\n\9\x\b\5\v\y\6\8\y\d\z\v\d\m\d\t\2\k\i\h\3\k\7\r\8\e\9\x\8\g\0\z\1\s\j\b\f\c\k\8\j\w\f\e\o\2\w\e\h\u\b\3\j\p\m\u\1\j\6\4\4\g\n\e\i\4\4\o\b\k\p\v\6\c\5\t\8\z\g\c\r\a\y\e\m\7\r\4\q\f\g\t\j\o\y\w\h\m\e\1\2\c\m\0\q\t\3\1\e\z\c\h\j\0\j\u\w\0\9\f\j\2\z\5\j\4\x\i\h\0\z\y\q\y\6\c\i\5\2\o\t\i\l\a\6\7\j\3\w\m\d\i\n\t\f\3\x\x\g\k\2\x\9\z\o\c\w\x\s\h\z\t\i\f\u\t\9\b\q\0\k\w\6\g\s\0\w\m\n\b\7\e\e\3
\8\d\6\9\h\w\8\p\0\f\u\t\9\2\y\1\b\8\c\f\j\q\0\m\u\l\u\v\7\y\q\g\5\3\2\y\9\r\6\u\k\y\d\m\0\p\0\r\z\c\c\7\j\2\1\r\4\i\k\x\4\a\a\k\s\2\l\b\y\g\h\a\n\w\4\1\a\x\k\5\2\l\l\u\0\4\7\6\3\r\r\f\j\5\o\5\8\8\3\d\i\h\g\2\d\b\b\9\6\3\k\v\h\q\1\t\p\f\o\q\x\r\n\b\5\s\x\l\i\l\o\i\h\q\r\5\d\5\4\w\3\4\d\m\1\l\6\p\m\a\h\t\m\k\8\2\1\p\w\x\4\s\e\4\1\x\o\n\7\9\1\x\5\q\0\h\f\g\u\u\k\b\3\a\0\z\6\l\4\x\y\w\t\w\a\0\w\q\n\s\5\3\7\u\e\p\9\n\4\v\8\u\h\s\u\h\q\v\i\8\t\l\n\5\z\k\n\w\z\c\q\y\b\w\a\1\8\w\d\u\w\c\7\s\y\q\f\j\w\l\3\u\e\g\g\b\v\j\a\p\j\c\e\f\0\k\3\a\u\c\5\8\0\r\v\h\8\u\9\1\m\3\3\z\y\1\a\t\l\1\d\z\u\t\g\3\2\s\0\1\1\j\o\e\l\c\7\b\z\p\4\5\7\b\j\e\j\s\h\6\4\6\y\9\8\4\8\j\p\7\e\o\t\t\l\d\v\f\g\0\r\9\p\h\4\b\4\1\9\u\l\n\s\7\i\a\f\l\u\6\c\f\3\j\1\i\4\4\s\7\p\l\x\z\y\c\1\x\s\u\o\k\8\s\r\4\4\j\z\4\r\3\0\9\m\d\w\m\o\2\l\z\g\s\k\0\t\1\m\0\n\a\n\0\d\x\h\2\q\6\2\u\q\v\l\t\n\2\m\i\8\3\a\t\4\m\t\k\q\7\1\k\y\t\6\m\y\6\p\x\8\p\j\3\b\k\o\p\6\d\o\b\4\x\5\8\f\b\8\5\r\c\f\p\9\g\l\i\g\h\v\i\o\0\y\k\f\s\u\4\5\1\5\e\v\x\s\p\t\0\r\s\v\a\y\q\d\p\k\u\z\w\z\l\x\u\n\i\l\a\o\y\m\3\i\7\j\3\k\p\i\b\g\p\b\p\o\l\l\7\w ]] 00:06:31.017 00:06:31.017 real 0m1.414s 00:06:31.017 user 0m1.001s 00:06:31.017 sys 0m0.600s 00:06:31.017 18:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.017 ************************************ 00:06:31.017 END TEST dd_rw_offset 00:06:31.017 ************************************ 00:06:31.017 18:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:31.277 18:27:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:31.277 [2024-05-16 18:27:44.590062] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:31.277 [2024-05-16 18:27:44.590169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62854 ] 00:06:31.277 { 00:06:31.277 "subsystems": [ 00:06:31.277 { 00:06:31.277 "subsystem": "bdev", 00:06:31.277 "config": [ 00:06:31.277 { 00:06:31.277 "params": { 00:06:31.277 "trtype": "pcie", 00:06:31.277 "traddr": "0000:00:10.0", 00:06:31.277 "name": "Nvme0" 00:06:31.277 }, 00:06:31.277 "method": "bdev_nvme_attach_controller" 00:06:31.277 }, 00:06:31.277 { 00:06:31.277 "method": "bdev_wait_for_examine" 00:06:31.277 } 00:06:31.277 ] 00:06:31.277 } 00:06:31.277 ] 00:06:31.277 } 00:06:31.277 [2024-05-16 18:27:44.726976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.536 [2024-05-16 18:27:44.857239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.536 [2024-05-16 18:27:44.917491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:31.794  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:31.794 00:06:31.794 18:27:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:31.794 ************************************ 00:06:31.794 END TEST spdk_dd_basic_rw 00:06:31.794 ************************************ 00:06:31.794 00:06:31.794 real 0m19.883s 00:06:31.794 user 0m14.518s 00:06:31.794 sys 0m7.079s 00:06:31.794 18:27:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.794 18:27:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.052 18:27:45 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:32.052 18:27:45 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.052 18:27:45 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.052 18:27:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:32.052 ************************************ 00:06:32.052 START TEST spdk_dd_posix 00:06:32.052 ************************************ 00:06:32.052 18:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:32.052 * Looking for test storage... 
00:06:32.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:32.052 18:27:45 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.052 18:27:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:32.053 * First test run, liburing in use 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:32.053 ************************************ 00:06:32.053 START TEST dd_flag_append 00:06:32.053 ************************************ 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1121 -- # append 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=32j70usdkvxv2s9h6qls4g4ufvnwoshl 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=dvdr7yhay0iev3qiz8hcx27phu87ckep 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 32j70usdkvxv2s9h6qls4g4ufvnwoshl 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s dvdr7yhay0iev3qiz8hcx27phu87ckep 00:06:32.053 18:27:45 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:32.053 [2024-05-16 18:27:45.477916] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:32.053 [2024-05-16 18:27:45.478041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62918 ] 00:06:32.311 [2024-05-16 18:27:45.620474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.311 [2024-05-16 18:27:45.744534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.311 [2024-05-16 18:27:45.802487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.568  Copying: 32/32 [B] (average 31 kBps) 00:06:32.568 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ dvdr7yhay0iev3qiz8hcx27phu87ckep32j70usdkvxv2s9h6qls4g4ufvnwoshl == \d\v\d\r\7\y\h\a\y\0\i\e\v\3\q\i\z\8\h\c\x\2\7\p\h\u\8\7\c\k\e\p\3\2\j\7\0\u\s\d\k\v\x\v\2\s\9\h\6\q\l\s\4\g\4\u\f\v\n\w\o\s\h\l ]] 00:06:32.827 00:06:32.827 real 0m0.652s 00:06:32.827 user 0m0.373s 00:06:32.827 sys 0m0.292s 00:06:32.827 ************************************ 00:06:32.827 END TEST dd_flag_append 00:06:32.827 ************************************ 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:32.827 ************************************ 00:06:32.827 START TEST dd_flag_directory 00:06:32.827 ************************************ 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1121 -- # directory 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.827 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.827 [2024-05-16 18:27:46.179241] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:32.827 [2024-05-16 18:27:46.179341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62947 ] 00:06:32.827 [2024-05-16 18:27:46.313951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.086 [2024-05-16 18:27:46.483017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.086 [2024-05-16 18:27:46.540932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.086 [2024-05-16 18:27:46.573537] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.086 [2024-05-16 18:27:46.573591] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.086 [2024-05-16 18:27:46.573620] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.345 [2024-05-16 18:27:46.684470] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:33.345 18:27:46 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:33.345 [2024-05-16 18:27:46.833956] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:33.345 [2024-05-16 18:27:46.834049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62956 ] 00:06:33.605 [2024-05-16 18:27:46.970169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.605 [2024-05-16 18:27:47.084092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.864 [2024-05-16 18:27:47.137951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:33.864 [2024-05-16 18:27:47.173355] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.864 [2024-05-16 18:27:47.173408] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:33.864 [2024-05-16 18:27:47.173439] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.864 [2024-05-16 18:27:47.289017] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:34.122 ************************************ 00:06:34.122 END TEST dd_flag_directory 00:06:34.122 ************************************ 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.122 00:06:34.122 real 0m1.265s 00:06:34.122 user 0m0.765s 00:06:34.122 sys 0m0.289s 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:34.122 18:27:47 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.122 18:27:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:34.122 ************************************ 00:06:34.122 START TEST dd_flag_nofollow 00:06:34.123 ************************************ 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1121 -- # nofollow 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.123 18:27:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:34.123 [2024-05-16 18:27:47.507121] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:34.123 [2024-05-16 18:27:47.507228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62990 ] 00:06:34.381 [2024-05-16 18:27:47.645126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.381 [2024-05-16 18:27:47.774384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.381 [2024-05-16 18:27:47.832029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.381 [2024-05-16 18:27:47.868128] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:34.381 [2024-05-16 18:27:47.868211] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:34.381 [2024-05-16 18:27:47.868241] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.640 [2024-05-16 18:27:47.985476] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.640 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.641 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.641 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.641 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.641 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.641 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:34.641 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow 
-- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:34.641 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:34.899 [2024-05-16 18:27:48.148880] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:34.899 [2024-05-16 18:27:48.149234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63001 ] 00:06:34.899 [2024-05-16 18:27:48.294296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.157 [2024-05-16 18:27:48.425859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.157 [2024-05-16 18:27:48.484348] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.157 [2024-05-16 18:27:48.521159] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.157 [2024-05-16 18:27:48.521453] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:35.157 [2024-05-16 18:27:48.521478] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.157 [2024-05-16 18:27:48.638199] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:35.415 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:35.415 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.415 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:35.415 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.415 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:35.415 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.415 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:35.416 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:35.416 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:35.416 18:27:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.416 [2024-05-16 18:27:48.794399] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:35.416 [2024-05-16 18:27:48.794494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63008 ] 00:06:35.675 [2024-05-16 18:27:48.937781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.675 [2024-05-16 18:27:49.087429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.675 [2024-05-16 18:27:49.154972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.933  Copying: 512/512 [B] (average 500 kBps) 00:06:35.933 00:06:35.933 ************************************ 00:06:35.933 END TEST dd_flag_nofollow 00:06:35.933 ************************************ 00:06:35.933 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ omagpuid426t5fggfgoa92zt1lv500shupainbrqnaqsjo12dn0ddbrqi2rx5ouek3fdzv7mfo6i5mn8iefa3s5eks13cu8mlvuc2d22z6dd13n5b0j9g2djm71aban55i5k2310ngs93qx46t2lujdaqnu0ap012qcezdp6wmkniqdnn55ewq36xpf2ugouhmubylvdoatoxt86005ql5m0i2budek2hsvbx8y7kqpraaustihk4tajif4q1199m78dx5zdax8v3l4njf1io7g62eix2b594aje3nuw1rtlxu1w426q5hb9kuf2zaec4gbqa0h68z4g3oyoeza7dee1e516kta1ykwqqdya1j2f1yq76z7nprk4mij238rg0eq2zodnuocesszs3s2lv773lym6m0sjda7h3hp4hzud30sl3ygsted2ipxv5vcj6urq7i1jvx82v2ixt1a7bobyqlvoww9cyg3pt7bw00eucj6ozkw4exj0jdm97pwi == \o\m\a\g\p\u\i\d\4\2\6\t\5\f\g\g\f\g\o\a\9\2\z\t\1\l\v\5\0\0\s\h\u\p\a\i\n\b\r\q\n\a\q\s\j\o\1\2\d\n\0\d\d\b\r\q\i\2\r\x\5\o\u\e\k\3\f\d\z\v\7\m\f\o\6\i\5\m\n\8\i\e\f\a\3\s\5\e\k\s\1\3\c\u\8\m\l\v\u\c\2\d\2\2\z\6\d\d\1\3\n\5\b\0\j\9\g\2\d\j\m\7\1\a\b\a\n\5\5\i\5\k\2\3\1\0\n\g\s\9\3\q\x\4\6\t\2\l\u\j\d\a\q\n\u\0\a\p\0\1\2\q\c\e\z\d\p\6\w\m\k\n\i\q\d\n\n\5\5\e\w\q\3\6\x\p\f\2\u\g\o\u\h\m\u\b\y\l\v\d\o\a\t\o\x\t\8\6\0\0\5\q\l\5\m\0\i\2\b\u\d\e\k\2\h\s\v\b\x\8\y\7\k\q\p\r\a\a\u\s\t\i\h\k\4\t\a\j\i\f\4\q\1\1\9\9\m\7\8\d\x\5\z\d\a\x\8\v\3\l\4\n\j\f\1\i\o\7\g\6\2\e\i\x\2\b\5\9\4\a\j\e\3\n\u\w\1\r\t\l\x\u\1\w\4\2\6\q\5\h\b\9\k\u\f\2\z\a\e\c\4\g\b\q\a\0\h\6\8\z\4\g\3\o\y\o\e\z\a\7\d\e\e\1\e\5\1\6\k\t\a\1\y\k\w\q\q\d\y\a\1\j\2\f\1\y\q\7\6\z\7\n\p\r\k\4\m\i\j\2\3\8\r\g\0\e\q\2\z\o\d\n\u\o\c\e\s\s\z\s\3\s\2\l\v\7\7\3\l\y\m\6\m\0\s\j\d\a\7\h\3\h\p\4\h\z\u\d\3\0\s\l\3\y\g\s\t\e\d\2\i\p\x\v\5\v\c\j\6\u\r\q\7\i\1\j\v\x\8\2\v\2\i\x\t\1\a\7\b\o\b\y\q\l\v\o\w\w\9\c\y\g\3\p\t\7\b\w\0\0\e\u\c\j\6\o\z\k\w\4\e\x\j\0\j\d\m\9\7\p\w\i ]] 00:06:35.933 00:06:35.933 real 0m1.981s 00:06:35.933 user 0m1.181s 00:06:35.933 sys 0m0.611s 00:06:35.933 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:35.933 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:36.191 ************************************ 00:06:36.191 START TEST dd_flag_noatime 00:06:36.191 ************************************ 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1121 -- # noatime 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:36.191 18:27:49 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1715884069 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1715884069 00:06:36.191 18:27:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:37.124 18:27:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.124 [2024-05-16 18:27:50.550428] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:37.124 [2024-05-16 18:27:50.550542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63056 ] 00:06:37.381 [2024-05-16 18:27:50.691017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.381 [2024-05-16 18:27:50.818515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.381 [2024-05-16 18:27:50.871065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.640  Copying: 512/512 [B] (average 500 kBps) 00:06:37.640 00:06:37.640 18:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:37.640 18:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1715884069 )) 00:06:37.640 18:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.640 18:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1715884069 )) 00:06:37.640 18:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:37.898 [2024-05-16 18:27:51.177363] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:37.898 [2024-05-16 18:27:51.177477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63070 ] 00:06:37.898 [2024-05-16 18:27:51.315921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.156 [2024-05-16 18:27:51.432077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.156 [2024-05-16 18:27:51.483514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.415  Copying: 512/512 [B] (average 500 kBps) 00:06:38.415 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1715884071 )) 00:06:38.415 00:06:38.415 real 0m2.258s 00:06:38.415 user 0m0.736s 00:06:38.415 sys 0m0.567s 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:38.415 ************************************ 00:06:38.415 END TEST dd_flag_noatime 00:06:38.415 ************************************ 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:38.415 ************************************ 00:06:38.415 START TEST dd_flags_misc 00:06:38.415 ************************************ 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1121 -- # io 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:38.415 18:27:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:38.415 [2024-05-16 18:27:51.836239] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:38.415 [2024-05-16 18:27:51.836342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63098 ] 00:06:38.674 [2024-05-16 18:27:51.977567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.674 [2024-05-16 18:27:52.112816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.674 [2024-05-16 18:27:52.170261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.940  Copying: 512/512 [B] (average 500 kBps) 00:06:38.941 00:06:39.212 18:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9a52bmr9516yz7d262u0jjssay3seuq8nyq139o7eef6vfey96um6a4jfrmhap4nsvy5ghitk3967ejgm0gk0hergll29lh1t6s2re4guvj96v3xe6sczb16tcthdjzcqqgohb2rmqaejj2abkow4soipvs3qht4g2ijqa12r8tzkk0gbco61llaylyh73uhzgfjkz552w5oe5prr9o2ycq01d60okvmgwnbng4iddd2voy476mhwi7q4zmx1mt6aw985p51x29kuezzmdl4g6eow1pmeefvuy9fs4pk2kiskf3mx0ddg9t7me3ehvjyy1k9v6wuf9hww5odv9hehbz4u2mc19u3btfqouir1givq2l7zjvpmpgeg38g7n8xrg3milbkbw5vbgo3cyqgsardw3l5arlijy293gvtj8d1l2141kz8ajeq22t8sbjqo17tpt53fnescxkco8vf1jsdef96xcys0jkgju7ak64pwo6bbyikxgxya96t80v5 == \9\a\5\2\b\m\r\9\5\1\6\y\z\7\d\2\6\2\u\0\j\j\s\s\a\y\3\s\e\u\q\8\n\y\q\1\3\9\o\7\e\e\f\6\v\f\e\y\9\6\u\m\6\a\4\j\f\r\m\h\a\p\4\n\s\v\y\5\g\h\i\t\k\3\9\6\7\e\j\g\m\0\g\k\0\h\e\r\g\l\l\2\9\l\h\1\t\6\s\2\r\e\4\g\u\v\j\9\6\v\3\x\e\6\s\c\z\b\1\6\t\c\t\h\d\j\z\c\q\q\g\o\h\b\2\r\m\q\a\e\j\j\2\a\b\k\o\w\4\s\o\i\p\v\s\3\q\h\t\4\g\2\i\j\q\a\1\2\r\8\t\z\k\k\0\g\b\c\o\6\1\l\l\a\y\l\y\h\7\3\u\h\z\g\f\j\k\z\5\5\2\w\5\o\e\5\p\r\r\9\o\2\y\c\q\0\1\d\6\0\o\k\v\m\g\w\n\b\n\g\4\i\d\d\d\2\v\o\y\4\7\6\m\h\w\i\7\q\4\z\m\x\1\m\t\6\a\w\9\8\5\p\5\1\x\2\9\k\u\e\z\z\m\d\l\4\g\6\e\o\w\1\p\m\e\e\f\v\u\y\9\f\s\4\p\k\2\k\i\s\k\f\3\m\x\0\d\d\g\9\t\7\m\e\3\e\h\v\j\y\y\1\k\9\v\6\w\u\f\9\h\w\w\5\o\d\v\9\h\e\h\b\z\4\u\2\m\c\1\9\u\3\b\t\f\q\o\u\i\r\1\g\i\v\q\2\l\7\z\j\v\p\m\p\g\e\g\3\8\g\7\n\8\x\r\g\3\m\i\l\b\k\b\w\5\v\b\g\o\3\c\y\q\g\s\a\r\d\w\3\l\5\a\r\l\i\j\y\2\9\3\g\v\t\j\8\d\1\l\2\1\4\1\k\z\8\a\j\e\q\2\2\t\8\s\b\j\q\o\1\7\t\p\t\5\3\f\n\e\s\c\x\k\c\o\8\v\f\1\j\s\d\e\f\9\6\x\c\y\s\0\j\k\g\j\u\7\a\k\6\4\p\w\o\6\b\b\y\i\k\x\g\x\y\a\9\6\t\8\0\v\5 ]] 00:06:39.212 18:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.212 18:27:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:39.212 [2024-05-16 18:27:52.510748] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:39.212 [2024-05-16 18:27:52.510920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63108 ] 00:06:39.212 [2024-05-16 18:27:52.654046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.470 [2024-05-16 18:27:52.769076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.470 [2024-05-16 18:27:52.821176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.729  Copying: 512/512 [B] (average 500 kBps) 00:06:39.729 00:06:39.729 18:27:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9a52bmr9516yz7d262u0jjssay3seuq8nyq139o7eef6vfey96um6a4jfrmhap4nsvy5ghitk3967ejgm0gk0hergll29lh1t6s2re4guvj96v3xe6sczb16tcthdjzcqqgohb2rmqaejj2abkow4soipvs3qht4g2ijqa12r8tzkk0gbco61llaylyh73uhzgfjkz552w5oe5prr9o2ycq01d60okvmgwnbng4iddd2voy476mhwi7q4zmx1mt6aw985p51x29kuezzmdl4g6eow1pmeefvuy9fs4pk2kiskf3mx0ddg9t7me3ehvjyy1k9v6wuf9hww5odv9hehbz4u2mc19u3btfqouir1givq2l7zjvpmpgeg38g7n8xrg3milbkbw5vbgo3cyqgsardw3l5arlijy293gvtj8d1l2141kz8ajeq22t8sbjqo17tpt53fnescxkco8vf1jsdef96xcys0jkgju7ak64pwo6bbyikxgxya96t80v5 == \9\a\5\2\b\m\r\9\5\1\6\y\z\7\d\2\6\2\u\0\j\j\s\s\a\y\3\s\e\u\q\8\n\y\q\1\3\9\o\7\e\e\f\6\v\f\e\y\9\6\u\m\6\a\4\j\f\r\m\h\a\p\4\n\s\v\y\5\g\h\i\t\k\3\9\6\7\e\j\g\m\0\g\k\0\h\e\r\g\l\l\2\9\l\h\1\t\6\s\2\r\e\4\g\u\v\j\9\6\v\3\x\e\6\s\c\z\b\1\6\t\c\t\h\d\j\z\c\q\q\g\o\h\b\2\r\m\q\a\e\j\j\2\a\b\k\o\w\4\s\o\i\p\v\s\3\q\h\t\4\g\2\i\j\q\a\1\2\r\8\t\z\k\k\0\g\b\c\o\6\1\l\l\a\y\l\y\h\7\3\u\h\z\g\f\j\k\z\5\5\2\w\5\o\e\5\p\r\r\9\o\2\y\c\q\0\1\d\6\0\o\k\v\m\g\w\n\b\n\g\4\i\d\d\d\2\v\o\y\4\7\6\m\h\w\i\7\q\4\z\m\x\1\m\t\6\a\w\9\8\5\p\5\1\x\2\9\k\u\e\z\z\m\d\l\4\g\6\e\o\w\1\p\m\e\e\f\v\u\y\9\f\s\4\p\k\2\k\i\s\k\f\3\m\x\0\d\d\g\9\t\7\m\e\3\e\h\v\j\y\y\1\k\9\v\6\w\u\f\9\h\w\w\5\o\d\v\9\h\e\h\b\z\4\u\2\m\c\1\9\u\3\b\t\f\q\o\u\i\r\1\g\i\v\q\2\l\7\z\j\v\p\m\p\g\e\g\3\8\g\7\n\8\x\r\g\3\m\i\l\b\k\b\w\5\v\b\g\o\3\c\y\q\g\s\a\r\d\w\3\l\5\a\r\l\i\j\y\2\9\3\g\v\t\j\8\d\1\l\2\1\4\1\k\z\8\a\j\e\q\2\2\t\8\s\b\j\q\o\1\7\t\p\t\5\3\f\n\e\s\c\x\k\c\o\8\v\f\1\j\s\d\e\f\9\6\x\c\y\s\0\j\k\g\j\u\7\a\k\6\4\p\w\o\6\b\b\y\i\k\x\g\x\y\a\9\6\t\8\0\v\5 ]] 00:06:39.729 18:27:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:39.729 18:27:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:39.729 [2024-05-16 18:27:53.138570] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:39.729 [2024-05-16 18:27:53.138674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63123 ] 00:06:39.989 [2024-05-16 18:27:53.276511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.989 [2024-05-16 18:27:53.428838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.248 [2024-05-16 18:27:53.500983] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.507  Copying: 512/512 [B] (average 125 kBps) 00:06:40.507 00:06:40.507 18:27:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9a52bmr9516yz7d262u0jjssay3seuq8nyq139o7eef6vfey96um6a4jfrmhap4nsvy5ghitk3967ejgm0gk0hergll29lh1t6s2re4guvj96v3xe6sczb16tcthdjzcqqgohb2rmqaejj2abkow4soipvs3qht4g2ijqa12r8tzkk0gbco61llaylyh73uhzgfjkz552w5oe5prr9o2ycq01d60okvmgwnbng4iddd2voy476mhwi7q4zmx1mt6aw985p51x29kuezzmdl4g6eow1pmeefvuy9fs4pk2kiskf3mx0ddg9t7me3ehvjyy1k9v6wuf9hww5odv9hehbz4u2mc19u3btfqouir1givq2l7zjvpmpgeg38g7n8xrg3milbkbw5vbgo3cyqgsardw3l5arlijy293gvtj8d1l2141kz8ajeq22t8sbjqo17tpt53fnescxkco8vf1jsdef96xcys0jkgju7ak64pwo6bbyikxgxya96t80v5 == \9\a\5\2\b\m\r\9\5\1\6\y\z\7\d\2\6\2\u\0\j\j\s\s\a\y\3\s\e\u\q\8\n\y\q\1\3\9\o\7\e\e\f\6\v\f\e\y\9\6\u\m\6\a\4\j\f\r\m\h\a\p\4\n\s\v\y\5\g\h\i\t\k\3\9\6\7\e\j\g\m\0\g\k\0\h\e\r\g\l\l\2\9\l\h\1\t\6\s\2\r\e\4\g\u\v\j\9\6\v\3\x\e\6\s\c\z\b\1\6\t\c\t\h\d\j\z\c\q\q\g\o\h\b\2\r\m\q\a\e\j\j\2\a\b\k\o\w\4\s\o\i\p\v\s\3\q\h\t\4\g\2\i\j\q\a\1\2\r\8\t\z\k\k\0\g\b\c\o\6\1\l\l\a\y\l\y\h\7\3\u\h\z\g\f\j\k\z\5\5\2\w\5\o\e\5\p\r\r\9\o\2\y\c\q\0\1\d\6\0\o\k\v\m\g\w\n\b\n\g\4\i\d\d\d\2\v\o\y\4\7\6\m\h\w\i\7\q\4\z\m\x\1\m\t\6\a\w\9\8\5\p\5\1\x\2\9\k\u\e\z\z\m\d\l\4\g\6\e\o\w\1\p\m\e\e\f\v\u\y\9\f\s\4\p\k\2\k\i\s\k\f\3\m\x\0\d\d\g\9\t\7\m\e\3\e\h\v\j\y\y\1\k\9\v\6\w\u\f\9\h\w\w\5\o\d\v\9\h\e\h\b\z\4\u\2\m\c\1\9\u\3\b\t\f\q\o\u\i\r\1\g\i\v\q\2\l\7\z\j\v\p\m\p\g\e\g\3\8\g\7\n\8\x\r\g\3\m\i\l\b\k\b\w\5\v\b\g\o\3\c\y\q\g\s\a\r\d\w\3\l\5\a\r\l\i\j\y\2\9\3\g\v\t\j\8\d\1\l\2\1\4\1\k\z\8\a\j\e\q\2\2\t\8\s\b\j\q\o\1\7\t\p\t\5\3\f\n\e\s\c\x\k\c\o\8\v\f\1\j\s\d\e\f\9\6\x\c\y\s\0\j\k\g\j\u\7\a\k\6\4\p\w\o\6\b\b\y\i\k\x\g\x\y\a\9\6\t\8\0\v\5 ]] 00:06:40.507 18:27:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:40.507 18:27:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:40.507 [2024-05-16 18:27:53.901943] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:40.507 [2024-05-16 18:27:53.902053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63132 ] 00:06:40.766 [2024-05-16 18:27:54.040691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.766 [2024-05-16 18:27:54.187889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.766 [2024-05-16 18:27:54.260025] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.290  Copying: 512/512 [B] (average 166 kBps) 00:06:41.290 00:06:41.290 18:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9a52bmr9516yz7d262u0jjssay3seuq8nyq139o7eef6vfey96um6a4jfrmhap4nsvy5ghitk3967ejgm0gk0hergll29lh1t6s2re4guvj96v3xe6sczb16tcthdjzcqqgohb2rmqaejj2abkow4soipvs3qht4g2ijqa12r8tzkk0gbco61llaylyh73uhzgfjkz552w5oe5prr9o2ycq01d60okvmgwnbng4iddd2voy476mhwi7q4zmx1mt6aw985p51x29kuezzmdl4g6eow1pmeefvuy9fs4pk2kiskf3mx0ddg9t7me3ehvjyy1k9v6wuf9hww5odv9hehbz4u2mc19u3btfqouir1givq2l7zjvpmpgeg38g7n8xrg3milbkbw5vbgo3cyqgsardw3l5arlijy293gvtj8d1l2141kz8ajeq22t8sbjqo17tpt53fnescxkco8vf1jsdef96xcys0jkgju7ak64pwo6bbyikxgxya96t80v5 == \9\a\5\2\b\m\r\9\5\1\6\y\z\7\d\2\6\2\u\0\j\j\s\s\a\y\3\s\e\u\q\8\n\y\q\1\3\9\o\7\e\e\f\6\v\f\e\y\9\6\u\m\6\a\4\j\f\r\m\h\a\p\4\n\s\v\y\5\g\h\i\t\k\3\9\6\7\e\j\g\m\0\g\k\0\h\e\r\g\l\l\2\9\l\h\1\t\6\s\2\r\e\4\g\u\v\j\9\6\v\3\x\e\6\s\c\z\b\1\6\t\c\t\h\d\j\z\c\q\q\g\o\h\b\2\r\m\q\a\e\j\j\2\a\b\k\o\w\4\s\o\i\p\v\s\3\q\h\t\4\g\2\i\j\q\a\1\2\r\8\t\z\k\k\0\g\b\c\o\6\1\l\l\a\y\l\y\h\7\3\u\h\z\g\f\j\k\z\5\5\2\w\5\o\e\5\p\r\r\9\o\2\y\c\q\0\1\d\6\0\o\k\v\m\g\w\n\b\n\g\4\i\d\d\d\2\v\o\y\4\7\6\m\h\w\i\7\q\4\z\m\x\1\m\t\6\a\w\9\8\5\p\5\1\x\2\9\k\u\e\z\z\m\d\l\4\g\6\e\o\w\1\p\m\e\e\f\v\u\y\9\f\s\4\p\k\2\k\i\s\k\f\3\m\x\0\d\d\g\9\t\7\m\e\3\e\h\v\j\y\y\1\k\9\v\6\w\u\f\9\h\w\w\5\o\d\v\9\h\e\h\b\z\4\u\2\m\c\1\9\u\3\b\t\f\q\o\u\i\r\1\g\i\v\q\2\l\7\z\j\v\p\m\p\g\e\g\3\8\g\7\n\8\x\r\g\3\m\i\l\b\k\b\w\5\v\b\g\o\3\c\y\q\g\s\a\r\d\w\3\l\5\a\r\l\i\j\y\2\9\3\g\v\t\j\8\d\1\l\2\1\4\1\k\z\8\a\j\e\q\2\2\t\8\s\b\j\q\o\1\7\t\p\t\5\3\f\n\e\s\c\x\k\c\o\8\v\f\1\j\s\d\e\f\9\6\x\c\y\s\0\j\k\g\j\u\7\a\k\6\4\p\w\o\6\b\b\y\i\k\x\g\x\y\a\9\6\t\8\0\v\5 ]] 00:06:41.290 18:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:41.290 18:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:41.290 18:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:41.290 18:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:41.290 18:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:41.290 18:27:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:41.290 [2024-05-16 18:27:54.663390] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:41.290 [2024-05-16 18:27:54.663502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63146 ] 00:06:41.549 [2024-05-16 18:27:54.797046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.549 [2024-05-16 18:27:54.943521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.549 [2024-05-16 18:27:55.017520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.068  Copying: 512/512 [B] (average 500 kBps) 00:06:42.068 00:06:42.068 18:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ aatffbai8mzckr7tlp1dsmc4nd7arke2xehhtsvooqzfm4pdk5elfbwudphrktt7nvl4e3wn0u56c1h814idt06s5eo1x8akdj68uj9rcfks663mpass51w6742u54c6wnkkmflgckpilwckifyvx6h1ozdblc57yohyqbqlg3yim3sp0x4peicaq2uu29b8esfdi9q3gjwm2o9hmtlk9a8afbw75y9ib1qnndai3ztuw5nb5ku4y2n8gkanbuer4gkfoq0wsxlzl44gkn0mrmdddtl76cqoo1s0zcjf0gt7yr61gqnziuwpyvneon3iip54ywq5drl84mir6a9yg1ercan3szb96on6dvz84ct7foeqs7sh4fahoykwvha9gvpzfs6ghh3hl5tr71jewpgxvcal9dipmwfime1vcjd04n1xfiuvqhqjlim6yzwv5cys9ovr2i2q2n7861tdsqhrdpd1ncgdd0rez8l085w5amkej7g5a0vm5nqm09ub == \a\a\t\f\f\b\a\i\8\m\z\c\k\r\7\t\l\p\1\d\s\m\c\4\n\d\7\a\r\k\e\2\x\e\h\h\t\s\v\o\o\q\z\f\m\4\p\d\k\5\e\l\f\b\w\u\d\p\h\r\k\t\t\7\n\v\l\4\e\3\w\n\0\u\5\6\c\1\h\8\1\4\i\d\t\0\6\s\5\e\o\1\x\8\a\k\d\j\6\8\u\j\9\r\c\f\k\s\6\6\3\m\p\a\s\s\5\1\w\6\7\4\2\u\5\4\c\6\w\n\k\k\m\f\l\g\c\k\p\i\l\w\c\k\i\f\y\v\x\6\h\1\o\z\d\b\l\c\5\7\y\o\h\y\q\b\q\l\g\3\y\i\m\3\s\p\0\x\4\p\e\i\c\a\q\2\u\u\2\9\b\8\e\s\f\d\i\9\q\3\g\j\w\m\2\o\9\h\m\t\l\k\9\a\8\a\f\b\w\7\5\y\9\i\b\1\q\n\n\d\a\i\3\z\t\u\w\5\n\b\5\k\u\4\y\2\n\8\g\k\a\n\b\u\e\r\4\g\k\f\o\q\0\w\s\x\l\z\l\4\4\g\k\n\0\m\r\m\d\d\d\t\l\7\6\c\q\o\o\1\s\0\z\c\j\f\0\g\t\7\y\r\6\1\g\q\n\z\i\u\w\p\y\v\n\e\o\n\3\i\i\p\5\4\y\w\q\5\d\r\l\8\4\m\i\r\6\a\9\y\g\1\e\r\c\a\n\3\s\z\b\9\6\o\n\6\d\v\z\8\4\c\t\7\f\o\e\q\s\7\s\h\4\f\a\h\o\y\k\w\v\h\a\9\g\v\p\z\f\s\6\g\h\h\3\h\l\5\t\r\7\1\j\e\w\p\g\x\v\c\a\l\9\d\i\p\m\w\f\i\m\e\1\v\c\j\d\0\4\n\1\x\f\i\u\v\q\h\q\j\l\i\m\6\y\z\w\v\5\c\y\s\9\o\v\r\2\i\2\q\2\n\7\8\6\1\t\d\s\q\h\r\d\p\d\1\n\c\g\d\d\0\r\e\z\8\l\0\8\5\w\5\a\m\k\e\j\7\g\5\a\0\v\m\5\n\q\m\0\9\u\b ]] 00:06:42.068 18:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.069 18:27:55 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:42.069 [2024-05-16 18:27:55.412156] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:42.069 [2024-05-16 18:27:55.412272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63157 ] 00:06:42.069 [2024-05-16 18:27:55.550114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.327 [2024-05-16 18:27:55.697662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.327 [2024-05-16 18:27:55.770322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.894  Copying: 512/512 [B] (average 500 kBps) 00:06:42.894 00:06:42.894 18:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ aatffbai8mzckr7tlp1dsmc4nd7arke2xehhtsvooqzfm4pdk5elfbwudphrktt7nvl4e3wn0u56c1h814idt06s5eo1x8akdj68uj9rcfks663mpass51w6742u54c6wnkkmflgckpilwckifyvx6h1ozdblc57yohyqbqlg3yim3sp0x4peicaq2uu29b8esfdi9q3gjwm2o9hmtlk9a8afbw75y9ib1qnndai3ztuw5nb5ku4y2n8gkanbuer4gkfoq0wsxlzl44gkn0mrmdddtl76cqoo1s0zcjf0gt7yr61gqnziuwpyvneon3iip54ywq5drl84mir6a9yg1ercan3szb96on6dvz84ct7foeqs7sh4fahoykwvha9gvpzfs6ghh3hl5tr71jewpgxvcal9dipmwfime1vcjd04n1xfiuvqhqjlim6yzwv5cys9ovr2i2q2n7861tdsqhrdpd1ncgdd0rez8l085w5amkej7g5a0vm5nqm09ub == \a\a\t\f\f\b\a\i\8\m\z\c\k\r\7\t\l\p\1\d\s\m\c\4\n\d\7\a\r\k\e\2\x\e\h\h\t\s\v\o\o\q\z\f\m\4\p\d\k\5\e\l\f\b\w\u\d\p\h\r\k\t\t\7\n\v\l\4\e\3\w\n\0\u\5\6\c\1\h\8\1\4\i\d\t\0\6\s\5\e\o\1\x\8\a\k\d\j\6\8\u\j\9\r\c\f\k\s\6\6\3\m\p\a\s\s\5\1\w\6\7\4\2\u\5\4\c\6\w\n\k\k\m\f\l\g\c\k\p\i\l\w\c\k\i\f\y\v\x\6\h\1\o\z\d\b\l\c\5\7\y\o\h\y\q\b\q\l\g\3\y\i\m\3\s\p\0\x\4\p\e\i\c\a\q\2\u\u\2\9\b\8\e\s\f\d\i\9\q\3\g\j\w\m\2\o\9\h\m\t\l\k\9\a\8\a\f\b\w\7\5\y\9\i\b\1\q\n\n\d\a\i\3\z\t\u\w\5\n\b\5\k\u\4\y\2\n\8\g\k\a\n\b\u\e\r\4\g\k\f\o\q\0\w\s\x\l\z\l\4\4\g\k\n\0\m\r\m\d\d\d\t\l\7\6\c\q\o\o\1\s\0\z\c\j\f\0\g\t\7\y\r\6\1\g\q\n\z\i\u\w\p\y\v\n\e\o\n\3\i\i\p\5\4\y\w\q\5\d\r\l\8\4\m\i\r\6\a\9\y\g\1\e\r\c\a\n\3\s\z\b\9\6\o\n\6\d\v\z\8\4\c\t\7\f\o\e\q\s\7\s\h\4\f\a\h\o\y\k\w\v\h\a\9\g\v\p\z\f\s\6\g\h\h\3\h\l\5\t\r\7\1\j\e\w\p\g\x\v\c\a\l\9\d\i\p\m\w\f\i\m\e\1\v\c\j\d\0\4\n\1\x\f\i\u\v\q\h\q\j\l\i\m\6\y\z\w\v\5\c\y\s\9\o\v\r\2\i\2\q\2\n\7\8\6\1\t\d\s\q\h\r\d\p\d\1\n\c\g\d\d\0\r\e\z\8\l\0\8\5\w\5\a\m\k\e\j\7\g\5\a\0\v\m\5\n\q\m\0\9\u\b ]] 00:06:42.894 18:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:42.894 18:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:42.894 [2024-05-16 18:27:56.172190] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:42.894 [2024-05-16 18:27:56.172309] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63172 ] 00:06:42.894 [2024-05-16 18:27:56.309845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.153 [2024-05-16 18:27:56.458154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.153 [2024-05-16 18:27:56.532166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.414  Copying: 512/512 [B] (average 500 kBps) 00:06:43.414 00:06:43.414 18:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ aatffbai8mzckr7tlp1dsmc4nd7arke2xehhtsvooqzfm4pdk5elfbwudphrktt7nvl4e3wn0u56c1h814idt06s5eo1x8akdj68uj9rcfks663mpass51w6742u54c6wnkkmflgckpilwckifyvx6h1ozdblc57yohyqbqlg3yim3sp0x4peicaq2uu29b8esfdi9q3gjwm2o9hmtlk9a8afbw75y9ib1qnndai3ztuw5nb5ku4y2n8gkanbuer4gkfoq0wsxlzl44gkn0mrmdddtl76cqoo1s0zcjf0gt7yr61gqnziuwpyvneon3iip54ywq5drl84mir6a9yg1ercan3szb96on6dvz84ct7foeqs7sh4fahoykwvha9gvpzfs6ghh3hl5tr71jewpgxvcal9dipmwfime1vcjd04n1xfiuvqhqjlim6yzwv5cys9ovr2i2q2n7861tdsqhrdpd1ncgdd0rez8l085w5amkej7g5a0vm5nqm09ub == \a\a\t\f\f\b\a\i\8\m\z\c\k\r\7\t\l\p\1\d\s\m\c\4\n\d\7\a\r\k\e\2\x\e\h\h\t\s\v\o\o\q\z\f\m\4\p\d\k\5\e\l\f\b\w\u\d\p\h\r\k\t\t\7\n\v\l\4\e\3\w\n\0\u\5\6\c\1\h\8\1\4\i\d\t\0\6\s\5\e\o\1\x\8\a\k\d\j\6\8\u\j\9\r\c\f\k\s\6\6\3\m\p\a\s\s\5\1\w\6\7\4\2\u\5\4\c\6\w\n\k\k\m\f\l\g\c\k\p\i\l\w\c\k\i\f\y\v\x\6\h\1\o\z\d\b\l\c\5\7\y\o\h\y\q\b\q\l\g\3\y\i\m\3\s\p\0\x\4\p\e\i\c\a\q\2\u\u\2\9\b\8\e\s\f\d\i\9\q\3\g\j\w\m\2\o\9\h\m\t\l\k\9\a\8\a\f\b\w\7\5\y\9\i\b\1\q\n\n\d\a\i\3\z\t\u\w\5\n\b\5\k\u\4\y\2\n\8\g\k\a\n\b\u\e\r\4\g\k\f\o\q\0\w\s\x\l\z\l\4\4\g\k\n\0\m\r\m\d\d\d\t\l\7\6\c\q\o\o\1\s\0\z\c\j\f\0\g\t\7\y\r\6\1\g\q\n\z\i\u\w\p\y\v\n\e\o\n\3\i\i\p\5\4\y\w\q\5\d\r\l\8\4\m\i\r\6\a\9\y\g\1\e\r\c\a\n\3\s\z\b\9\6\o\n\6\d\v\z\8\4\c\t\7\f\o\e\q\s\7\s\h\4\f\a\h\o\y\k\w\v\h\a\9\g\v\p\z\f\s\6\g\h\h\3\h\l\5\t\r\7\1\j\e\w\p\g\x\v\c\a\l\9\d\i\p\m\w\f\i\m\e\1\v\c\j\d\0\4\n\1\x\f\i\u\v\q\h\q\j\l\i\m\6\y\z\w\v\5\c\y\s\9\o\v\r\2\i\2\q\2\n\7\8\6\1\t\d\s\q\h\r\d\p\d\1\n\c\g\d\d\0\r\e\z\8\l\0\8\5\w\5\a\m\k\e\j\7\g\5\a\0\v\m\5\n\q\m\0\9\u\b ]] 00:06:43.414 18:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:43.414 18:27:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:43.674 [2024-05-16 18:27:56.934497] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:43.674 [2024-05-16 18:27:56.934632] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63181 ] 00:06:43.674 [2024-05-16 18:27:57.073292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.933 [2024-05-16 18:27:57.239654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.933 [2024-05-16 18:27:57.312184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.191  Copying: 512/512 [B] (average 166 kBps) 00:06:44.191 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ aatffbai8mzckr7tlp1dsmc4nd7arke2xehhtsvooqzfm4pdk5elfbwudphrktt7nvl4e3wn0u56c1h814idt06s5eo1x8akdj68uj9rcfks663mpass51w6742u54c6wnkkmflgckpilwckifyvx6h1ozdblc57yohyqbqlg3yim3sp0x4peicaq2uu29b8esfdi9q3gjwm2o9hmtlk9a8afbw75y9ib1qnndai3ztuw5nb5ku4y2n8gkanbuer4gkfoq0wsxlzl44gkn0mrmdddtl76cqoo1s0zcjf0gt7yr61gqnziuwpyvneon3iip54ywq5drl84mir6a9yg1ercan3szb96on6dvz84ct7foeqs7sh4fahoykwvha9gvpzfs6ghh3hl5tr71jewpgxvcal9dipmwfime1vcjd04n1xfiuvqhqjlim6yzwv5cys9ovr2i2q2n7861tdsqhrdpd1ncgdd0rez8l085w5amkej7g5a0vm5nqm09ub == \a\a\t\f\f\b\a\i\8\m\z\c\k\r\7\t\l\p\1\d\s\m\c\4\n\d\7\a\r\k\e\2\x\e\h\h\t\s\v\o\o\q\z\f\m\4\p\d\k\5\e\l\f\b\w\u\d\p\h\r\k\t\t\7\n\v\l\4\e\3\w\n\0\u\5\6\c\1\h\8\1\4\i\d\t\0\6\s\5\e\o\1\x\8\a\k\d\j\6\8\u\j\9\r\c\f\k\s\6\6\3\m\p\a\s\s\5\1\w\6\7\4\2\u\5\4\c\6\w\n\k\k\m\f\l\g\c\k\p\i\l\w\c\k\i\f\y\v\x\6\h\1\o\z\d\b\l\c\5\7\y\o\h\y\q\b\q\l\g\3\y\i\m\3\s\p\0\x\4\p\e\i\c\a\q\2\u\u\2\9\b\8\e\s\f\d\i\9\q\3\g\j\w\m\2\o\9\h\m\t\l\k\9\a\8\a\f\b\w\7\5\y\9\i\b\1\q\n\n\d\a\i\3\z\t\u\w\5\n\b\5\k\u\4\y\2\n\8\g\k\a\n\b\u\e\r\4\g\k\f\o\q\0\w\s\x\l\z\l\4\4\g\k\n\0\m\r\m\d\d\d\t\l\7\6\c\q\o\o\1\s\0\z\c\j\f\0\g\t\7\y\r\6\1\g\q\n\z\i\u\w\p\y\v\n\e\o\n\3\i\i\p\5\4\y\w\q\5\d\r\l\8\4\m\i\r\6\a\9\y\g\1\e\r\c\a\n\3\s\z\b\9\6\o\n\6\d\v\z\8\4\c\t\7\f\o\e\q\s\7\s\h\4\f\a\h\o\y\k\w\v\h\a\9\g\v\p\z\f\s\6\g\h\h\3\h\l\5\t\r\7\1\j\e\w\p\g\x\v\c\a\l\9\d\i\p\m\w\f\i\m\e\1\v\c\j\d\0\4\n\1\x\f\i\u\v\q\h\q\j\l\i\m\6\y\z\w\v\5\c\y\s\9\o\v\r\2\i\2\q\2\n\7\8\6\1\t\d\s\q\h\r\d\p\d\1\n\c\g\d\d\0\r\e\z\8\l\0\8\5\w\5\a\m\k\e\j\7\g\5\a\0\v\m\5\n\q\m\0\9\u\b ]] 00:06:44.451 00:06:44.451 real 0m5.922s 00:06:44.451 user 0m3.648s 00:06:44.451 sys 0m2.747s 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:44.451 ************************************ 00:06:44.451 END TEST dd_flags_misc 00:06:44.451 ************************************ 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:44.451 * Second test run, disabling liburing, forcing AIO 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:44.451 
************************************ 00:06:44.451 START TEST dd_flag_append_forced_aio 00:06:44.451 ************************************ 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1121 -- # append 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=j5yk71x7uvuc3aoj5j5sb5phe1o4onkn 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=br39zu9i6isipb7442npa8934s63erry 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s j5yk71x7uvuc3aoj5j5sb5phe1o4onkn 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s br39zu9i6isipb7442npa8934s63erry 00:06:44.451 18:27:57 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:44.451 [2024-05-16 18:27:57.824733] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:44.451 [2024-05-16 18:27:57.824859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63210 ] 00:06:44.711 [2024-05-16 18:27:57.964535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.711 [2024-05-16 18:27:58.122781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.711 [2024-05-16 18:27:58.195197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.229  Copying: 32/32 [B] (average 31 kBps) 00:06:45.229 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ br39zu9i6isipb7442npa8934s63erryj5yk71x7uvuc3aoj5j5sb5phe1o4onkn == \b\r\3\9\z\u\9\i\6\i\s\i\p\b\7\4\4\2\n\p\a\8\9\3\4\s\6\3\e\r\r\y\j\5\y\k\7\1\x\7\u\v\u\c\3\a\o\j\5\j\5\s\b\5\p\h\e\1\o\4\o\n\k\n ]] 00:06:45.229 00:06:45.229 real 0m0.809s 00:06:45.229 user 0m0.495s 00:06:45.229 sys 0m0.191s 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:45.229 ************************************ 00:06:45.229 END TEST dd_flag_append_forced_aio 00:06:45.229 ************************************ 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:45.229 ************************************ 00:06:45.229 START TEST dd_flag_directory_forced_aio 00:06:45.229 ************************************ 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1121 -- # directory 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 
-- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.229 18:27:58 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:45.229 [2024-05-16 18:27:58.668015] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:45.229 [2024-05-16 18:27:58.668135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63242 ] 00:06:45.488 [2024-05-16 18:27:58.810844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.488 [2024-05-16 18:27:58.958292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.746 [2024-05-16 18:27:59.030376] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.746 [2024-05-16 18:27:59.076046] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:45.746 [2024-05-16 18:27:59.076111] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:45.747 [2024-05-16 18:27:59.076129] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.747 [2024-05-16 18:27:59.239474] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.032 18:27:59 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.032 18:27:59 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:46.032 [2024-05-16 18:27:59.420290] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:46.032 [2024-05-16 18:27:59.420393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63257 ] 00:06:46.290 [2024-05-16 18:27:59.553264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.290 [2024-05-16 18:27:59.698742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.290 [2024-05-16 18:27:59.770747] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.549 [2024-05-16 18:27:59.819265] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.549 [2024-05-16 18:27:59.819331] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:46.549 [2024-05-16 18:27:59.819347] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.549 [2024-05-16 18:27:59.985575] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.808 00:06:46.808 real 0m1.507s 00:06:46.808 
user 0m0.923s 00:06:46.808 sys 0m0.370s 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:46.808 ************************************ 00:06:46.808 END TEST dd_flag_directory_forced_aio 00:06:46.808 ************************************ 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:46.808 ************************************ 00:06:46.808 START TEST dd_flag_nofollow_forced_aio 00:06:46.808 ************************************ 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1121 -- # nofollow 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 
-- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.808 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.808 [2024-05-16 18:28:00.227743] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:46.808 [2024-05-16 18:28:00.227861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63291 ] 00:06:47.067 [2024-05-16 18:28:00.361922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.067 [2024-05-16 18:28:00.507280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.326 [2024-05-16 18:28:00.580480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.326 [2024-05-16 18:28:00.626779] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:47.326 [2024-05-16 18:28:00.626861] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:47.326 [2024-05-16 18:28:00.626879] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.326 [2024-05-16 18:28:00.792781] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.585 
18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.585 18:28:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:47.585 [2024-05-16 18:28:00.984609] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:47.585 [2024-05-16 18:28:00.984713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63295 ] 00:06:47.844 [2024-05-16 18:28:01.119612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.844 [2024-05-16 18:28:01.266202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.844 [2024-05-16 18:28:01.338908] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.102 [2024-05-16 18:28:01.385981] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:48.102 [2024-05-16 18:28:01.386044] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:48.102 [2024-05-16 18:28:01.386061] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.102 [2024-05-16 18:28:01.554727] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@10 -- # set +x 00:06:48.359 18:28:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.359 [2024-05-16 18:28:01.742535] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:48.359 [2024-05-16 18:28:01.742649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63311 ] 00:06:48.617 [2024-05-16 18:28:01.875852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.617 [2024-05-16 18:28:02.023874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.617 [2024-05-16 18:28:02.096250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.135  Copying: 512/512 [B] (average 500 kBps) 00:06:49.135 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ xhi7rndbaf9669hdbhnj6osy2iz63bsdj4gjrm03b3c4piocgsusa9dhsbxdcf66lezoz0wfu5r6kkbkeh173hbmg6v23gspiwq30vl45pj1trde25zggbp96ss1csve9r0ucj18f7u0nuq5raie1uf0iizkennif02j6ku52iafnqpmhd9wtczer8lo12ulxv014u9iwbbuqynz1qzan9mvf5wh00bqn4p85hzxlpnjpv3wud3kmzymrqjiceyu4667e0y1swu0nryyjbukhq17epvfo4zjjycxipdljj0xjnyrnz7cc02argad336a9b3ugk2416zlb460h7pn9blm1ksfo9upcv3i0brx87fb1f3femejcjlmgwpmnokj7tcaq5n59xn7qrdiac89dxgdzdxygs25rn3d5hkmkidzpso9rlwhjr5dv1d239i7rh45kqs1phe24abzn9bmcffbygaxbpct6cdhle08pfnc5rsis0f1f3d4cx1chbkw == \x\h\i\7\r\n\d\b\a\f\9\6\6\9\h\d\b\h\n\j\6\o\s\y\2\i\z\6\3\b\s\d\j\4\g\j\r\m\0\3\b\3\c\4\p\i\o\c\g\s\u\s\a\9\d\h\s\b\x\d\c\f\6\6\l\e\z\o\z\0\w\f\u\5\r\6\k\k\b\k\e\h\1\7\3\h\b\m\g\6\v\2\3\g\s\p\i\w\q\3\0\v\l\4\5\p\j\1\t\r\d\e\2\5\z\g\g\b\p\9\6\s\s\1\c\s\v\e\9\r\0\u\c\j\1\8\f\7\u\0\n\u\q\5\r\a\i\e\1\u\f\0\i\i\z\k\e\n\n\i\f\0\2\j\6\k\u\5\2\i\a\f\n\q\p\m\h\d\9\w\t\c\z\e\r\8\l\o\1\2\u\l\x\v\0\1\4\u\9\i\w\b\b\u\q\y\n\z\1\q\z\a\n\9\m\v\f\5\w\h\0\0\b\q\n\4\p\8\5\h\z\x\l\p\n\j\p\v\3\w\u\d\3\k\m\z\y\m\r\q\j\i\c\e\y\u\4\6\6\7\e\0\y\1\s\w\u\0\n\r\y\y\j\b\u\k\h\q\1\7\e\p\v\f\o\4\z\j\j\y\c\x\i\p\d\l\j\j\0\x\j\n\y\r\n\z\7\c\c\0\2\a\r\g\a\d\3\3\6\a\9\b\3\u\g\k\2\4\1\6\z\l\b\4\6\0\h\7\p\n\9\b\l\m\1\k\s\f\o\9\u\p\c\v\3\i\0\b\r\x\8\7\f\b\1\f\3\f\e\m\e\j\c\j\l\m\g\w\p\m\n\o\k\j\7\t\c\a\q\5\n\5\9\x\n\7\q\r\d\i\a\c\8\9\d\x\g\d\z\d\x\y\g\s\2\5\r\n\3\d\5\h\k\m\k\i\d\z\p\s\o\9\r\l\w\h\j\r\5\d\v\1\d\2\3\9\i\7\r\h\4\5\k\q\s\1\p\h\e\2\4\a\b\z\n\9\b\m\c\f\f\b\y\g\a\x\b\p\c\t\6\c\d\h\l\e\0\8\p\f\n\c\5\r\s\i\s\0\f\1\f\3\d\4\c\x\1\c\h\b\k\w ]] 00:06:49.135 00:06:49.135 real 0m2.309s 00:06:49.135 user 0m1.430s 00:06:49.135 sys 0m0.543s 00:06:49.135 ************************************ 00:06:49.135 END TEST dd_flag_nofollow_forced_aio 00:06:49.135 ************************************ 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 ************************************ 00:06:49.135 START TEST dd_flag_noatime_forced_aio 00:06:49.135 ************************************ 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1121 -- # noatime 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1715884082 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1715884082 00:06:49.135 18:28:02 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:50.071 18:28:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:50.327 [2024-05-16 18:28:03.588733] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:50.327 [2024-05-16 18:28:03.588854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63359 ] 00:06:50.327 [2024-05-16 18:28:03.725200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.591 [2024-05-16 18:28:03.873077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.591 [2024-05-16 18:28:03.946463] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.849  Copying: 512/512 [B] (average 500 kBps) 00:06:50.849 00:06:50.849 18:28:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.107 18:28:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1715884082 )) 00:06:51.107 18:28:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.107 18:28:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1715884082 )) 00:06:51.107 18:28:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.107 [2024-05-16 18:28:04.404320] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:51.107 [2024-05-16 18:28:04.404428] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63376 ] 00:06:51.107 [2024-05-16 18:28:04.537714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.366 [2024-05-16 18:28:04.685536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.366 [2024-05-16 18:28:04.758203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.934  Copying: 512/512 [B] (average 500 kBps) 00:06:51.934 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1715884084 )) 00:06:51.934 00:06:51.934 real 0m2.632s 00:06:51.934 user 0m0.989s 00:06:51.934 sys 0m0.380s 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 ************************************ 00:06:51.934 END TEST dd_flag_noatime_forced_aio 00:06:51.934 ************************************ 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 ************************************ 00:06:51.934 START TEST 
dd_flags_misc_forced_aio 00:06:51.934 ************************************ 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1121 -- # io 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:51.934 18:28:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:51.934 [2024-05-16 18:28:05.264405] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:51.934 [2024-05-16 18:28:05.264512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63407 ] 00:06:51.934 [2024-05-16 18:28:05.403961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.194 [2024-05-16 18:28:05.551534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.194 [2024-05-16 18:28:05.624541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.763  Copying: 512/512 [B] (average 500 kBps) 00:06:52.763 00:06:52.763 18:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p439fhdoqcf2wu0uw89ps471pdc3y7s1agna23cqcf53j30v33llmw61g9vbxcjz5qqmfs7yjx7ee9e7kn3eom4xgeh2uwj8ep5fi3417q1oasw1yexqxjfahbxgcvfid49azh8wj23xfu0lqt864s7e9ft6cxtbssfkw3uocq2xwmn5ert1wjb6o0lwxdxowp19h3qnjm2kd2eaga75jxfd1czeboxvqva9smmtxkvl84lhy4imqkw7vgz9soxd2xiehzkz1ecf22pn3jxwct4wtjakjw8ud0utkyua13ctizrdn1h676rf4aqya5ygjv1sa92mk2kjmf9bas0qi1ksnhlyae6o15fi4t60sb3zy4sjdx70uuwigjcu3ztxa3567z0zoqj300t23mufcmw0rl76buz4wfmwo1gemc7qkc7uw8uk1in6uxh5euf1zev3lbc4ucgw7jwcwb6pj1l37mv2rvrp7ygq09ysqrxojl91mn8blh54olqllcpc == 
\p\4\3\9\f\h\d\o\q\c\f\2\w\u\0\u\w\8\9\p\s\4\7\1\p\d\c\3\y\7\s\1\a\g\n\a\2\3\c\q\c\f\5\3\j\3\0\v\3\3\l\l\m\w\6\1\g\9\v\b\x\c\j\z\5\q\q\m\f\s\7\y\j\x\7\e\e\9\e\7\k\n\3\e\o\m\4\x\g\e\h\2\u\w\j\8\e\p\5\f\i\3\4\1\7\q\1\o\a\s\w\1\y\e\x\q\x\j\f\a\h\b\x\g\c\v\f\i\d\4\9\a\z\h\8\w\j\2\3\x\f\u\0\l\q\t\8\6\4\s\7\e\9\f\t\6\c\x\t\b\s\s\f\k\w\3\u\o\c\q\2\x\w\m\n\5\e\r\t\1\w\j\b\6\o\0\l\w\x\d\x\o\w\p\1\9\h\3\q\n\j\m\2\k\d\2\e\a\g\a\7\5\j\x\f\d\1\c\z\e\b\o\x\v\q\v\a\9\s\m\m\t\x\k\v\l\8\4\l\h\y\4\i\m\q\k\w\7\v\g\z\9\s\o\x\d\2\x\i\e\h\z\k\z\1\e\c\f\2\2\p\n\3\j\x\w\c\t\4\w\t\j\a\k\j\w\8\u\d\0\u\t\k\y\u\a\1\3\c\t\i\z\r\d\n\1\h\6\7\6\r\f\4\a\q\y\a\5\y\g\j\v\1\s\a\9\2\m\k\2\k\j\m\f\9\b\a\s\0\q\i\1\k\s\n\h\l\y\a\e\6\o\1\5\f\i\4\t\6\0\s\b\3\z\y\4\s\j\d\x\7\0\u\u\w\i\g\j\c\u\3\z\t\x\a\3\5\6\7\z\0\z\o\q\j\3\0\0\t\2\3\m\u\f\c\m\w\0\r\l\7\6\b\u\z\4\w\f\m\w\o\1\g\e\m\c\7\q\k\c\7\u\w\8\u\k\1\i\n\6\u\x\h\5\e\u\f\1\z\e\v\3\l\b\c\4\u\c\g\w\7\j\w\c\w\b\6\p\j\1\l\3\7\m\v\2\r\v\r\p\7\y\g\q\0\9\y\s\q\r\x\o\j\l\9\1\m\n\8\b\l\h\5\4\o\l\q\l\l\c\p\c ]] 00:06:52.763 18:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:52.763 18:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:52.763 [2024-05-16 18:28:06.084134] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:52.763 [2024-05-16 18:28:06.084270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63410 ] 00:06:52.763 [2024-05-16 18:28:06.221409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.041 [2024-05-16 18:28:06.368358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.041 [2024-05-16 18:28:06.441553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.609  Copying: 512/512 [B] (average 500 kBps) 00:06:53.609 00:06:53.610 18:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p439fhdoqcf2wu0uw89ps471pdc3y7s1agna23cqcf53j30v33llmw61g9vbxcjz5qqmfs7yjx7ee9e7kn3eom4xgeh2uwj8ep5fi3417q1oasw1yexqxjfahbxgcvfid49azh8wj23xfu0lqt864s7e9ft6cxtbssfkw3uocq2xwmn5ert1wjb6o0lwxdxowp19h3qnjm2kd2eaga75jxfd1czeboxvqva9smmtxkvl84lhy4imqkw7vgz9soxd2xiehzkz1ecf22pn3jxwct4wtjakjw8ud0utkyua13ctizrdn1h676rf4aqya5ygjv1sa92mk2kjmf9bas0qi1ksnhlyae6o15fi4t60sb3zy4sjdx70uuwigjcu3ztxa3567z0zoqj300t23mufcmw0rl76buz4wfmwo1gemc7qkc7uw8uk1in6uxh5euf1zev3lbc4ucgw7jwcwb6pj1l37mv2rvrp7ygq09ysqrxojl91mn8blh54olqllcpc == 
\p\4\3\9\f\h\d\o\q\c\f\2\w\u\0\u\w\8\9\p\s\4\7\1\p\d\c\3\y\7\s\1\a\g\n\a\2\3\c\q\c\f\5\3\j\3\0\v\3\3\l\l\m\w\6\1\g\9\v\b\x\c\j\z\5\q\q\m\f\s\7\y\j\x\7\e\e\9\e\7\k\n\3\e\o\m\4\x\g\e\h\2\u\w\j\8\e\p\5\f\i\3\4\1\7\q\1\o\a\s\w\1\y\e\x\q\x\j\f\a\h\b\x\g\c\v\f\i\d\4\9\a\z\h\8\w\j\2\3\x\f\u\0\l\q\t\8\6\4\s\7\e\9\f\t\6\c\x\t\b\s\s\f\k\w\3\u\o\c\q\2\x\w\m\n\5\e\r\t\1\w\j\b\6\o\0\l\w\x\d\x\o\w\p\1\9\h\3\q\n\j\m\2\k\d\2\e\a\g\a\7\5\j\x\f\d\1\c\z\e\b\o\x\v\q\v\a\9\s\m\m\t\x\k\v\l\8\4\l\h\y\4\i\m\q\k\w\7\v\g\z\9\s\o\x\d\2\x\i\e\h\z\k\z\1\e\c\f\2\2\p\n\3\j\x\w\c\t\4\w\t\j\a\k\j\w\8\u\d\0\u\t\k\y\u\a\1\3\c\t\i\z\r\d\n\1\h\6\7\6\r\f\4\a\q\y\a\5\y\g\j\v\1\s\a\9\2\m\k\2\k\j\m\f\9\b\a\s\0\q\i\1\k\s\n\h\l\y\a\e\6\o\1\5\f\i\4\t\6\0\s\b\3\z\y\4\s\j\d\x\7\0\u\u\w\i\g\j\c\u\3\z\t\x\a\3\5\6\7\z\0\z\o\q\j\3\0\0\t\2\3\m\u\f\c\m\w\0\r\l\7\6\b\u\z\4\w\f\m\w\o\1\g\e\m\c\7\q\k\c\7\u\w\8\u\k\1\i\n\6\u\x\h\5\e\u\f\1\z\e\v\3\l\b\c\4\u\c\g\w\7\j\w\c\w\b\6\p\j\1\l\3\7\m\v\2\r\v\r\p\7\y\g\q\0\9\y\s\q\r\x\o\j\l\9\1\m\n\8\b\l\h\5\4\o\l\q\l\l\c\p\c ]] 00:06:53.610 18:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:53.610 18:28:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:53.610 [2024-05-16 18:28:06.895092] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:53.610 [2024-05-16 18:28:06.895204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63423 ] 00:06:53.610 [2024-05-16 18:28:07.030686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.868 [2024-05-16 18:28:07.177570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.868 [2024-05-16 18:28:07.251260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.163  Copying: 512/512 [B] (average 166 kBps) 00:06:54.163 00:06:54.163 18:28:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p439fhdoqcf2wu0uw89ps471pdc3y7s1agna23cqcf53j30v33llmw61g9vbxcjz5qqmfs7yjx7ee9e7kn3eom4xgeh2uwj8ep5fi3417q1oasw1yexqxjfahbxgcvfid49azh8wj23xfu0lqt864s7e9ft6cxtbssfkw3uocq2xwmn5ert1wjb6o0lwxdxowp19h3qnjm2kd2eaga75jxfd1czeboxvqva9smmtxkvl84lhy4imqkw7vgz9soxd2xiehzkz1ecf22pn3jxwct4wtjakjw8ud0utkyua13ctizrdn1h676rf4aqya5ygjv1sa92mk2kjmf9bas0qi1ksnhlyae6o15fi4t60sb3zy4sjdx70uuwigjcu3ztxa3567z0zoqj300t23mufcmw0rl76buz4wfmwo1gemc7qkc7uw8uk1in6uxh5euf1zev3lbc4ucgw7jwcwb6pj1l37mv2rvrp7ygq09ysqrxojl91mn8blh54olqllcpc == 
\p\4\3\9\f\h\d\o\q\c\f\2\w\u\0\u\w\8\9\p\s\4\7\1\p\d\c\3\y\7\s\1\a\g\n\a\2\3\c\q\c\f\5\3\j\3\0\v\3\3\l\l\m\w\6\1\g\9\v\b\x\c\j\z\5\q\q\m\f\s\7\y\j\x\7\e\e\9\e\7\k\n\3\e\o\m\4\x\g\e\h\2\u\w\j\8\e\p\5\f\i\3\4\1\7\q\1\o\a\s\w\1\y\e\x\q\x\j\f\a\h\b\x\g\c\v\f\i\d\4\9\a\z\h\8\w\j\2\3\x\f\u\0\l\q\t\8\6\4\s\7\e\9\f\t\6\c\x\t\b\s\s\f\k\w\3\u\o\c\q\2\x\w\m\n\5\e\r\t\1\w\j\b\6\o\0\l\w\x\d\x\o\w\p\1\9\h\3\q\n\j\m\2\k\d\2\e\a\g\a\7\5\j\x\f\d\1\c\z\e\b\o\x\v\q\v\a\9\s\m\m\t\x\k\v\l\8\4\l\h\y\4\i\m\q\k\w\7\v\g\z\9\s\o\x\d\2\x\i\e\h\z\k\z\1\e\c\f\2\2\p\n\3\j\x\w\c\t\4\w\t\j\a\k\j\w\8\u\d\0\u\t\k\y\u\a\1\3\c\t\i\z\r\d\n\1\h\6\7\6\r\f\4\a\q\y\a\5\y\g\j\v\1\s\a\9\2\m\k\2\k\j\m\f\9\b\a\s\0\q\i\1\k\s\n\h\l\y\a\e\6\o\1\5\f\i\4\t\6\0\s\b\3\z\y\4\s\j\d\x\7\0\u\u\w\i\g\j\c\u\3\z\t\x\a\3\5\6\7\z\0\z\o\q\j\3\0\0\t\2\3\m\u\f\c\m\w\0\r\l\7\6\b\u\z\4\w\f\m\w\o\1\g\e\m\c\7\q\k\c\7\u\w\8\u\k\1\i\n\6\u\x\h\5\e\u\f\1\z\e\v\3\l\b\c\4\u\c\g\w\7\j\w\c\w\b\6\p\j\1\l\3\7\m\v\2\r\v\r\p\7\y\g\q\0\9\y\s\q\r\x\o\j\l\9\1\m\n\8\b\l\h\5\4\o\l\q\l\l\c\p\c ]] 00:06:54.163 18:28:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:54.163 18:28:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:54.421 [2024-05-16 18:28:07.674702] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:06:54.421 [2024-05-16 18:28:07.674838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63436 ] 00:06:54.421 [2024-05-16 18:28:07.810650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.680 [2024-05-16 18:28:07.957620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.680 [2024-05-16 18:28:08.031862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.938  Copying: 512/512 [B] (average 166 kBps) 00:06:54.938 00:06:54.938 18:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ p439fhdoqcf2wu0uw89ps471pdc3y7s1agna23cqcf53j30v33llmw61g9vbxcjz5qqmfs7yjx7ee9e7kn3eom4xgeh2uwj8ep5fi3417q1oasw1yexqxjfahbxgcvfid49azh8wj23xfu0lqt864s7e9ft6cxtbssfkw3uocq2xwmn5ert1wjb6o0lwxdxowp19h3qnjm2kd2eaga75jxfd1czeboxvqva9smmtxkvl84lhy4imqkw7vgz9soxd2xiehzkz1ecf22pn3jxwct4wtjakjw8ud0utkyua13ctizrdn1h676rf4aqya5ygjv1sa92mk2kjmf9bas0qi1ksnhlyae6o15fi4t60sb3zy4sjdx70uuwigjcu3ztxa3567z0zoqj300t23mufcmw0rl76buz4wfmwo1gemc7qkc7uw8uk1in6uxh5euf1zev3lbc4ucgw7jwcwb6pj1l37mv2rvrp7ygq09ysqrxojl91mn8blh54olqllcpc == 
\p\4\3\9\f\h\d\o\q\c\f\2\w\u\0\u\w\8\9\p\s\4\7\1\p\d\c\3\y\7\s\1\a\g\n\a\2\3\c\q\c\f\5\3\j\3\0\v\3\3\l\l\m\w\6\1\g\9\v\b\x\c\j\z\5\q\q\m\f\s\7\y\j\x\7\e\e\9\e\7\k\n\3\e\o\m\4\x\g\e\h\2\u\w\j\8\e\p\5\f\i\3\4\1\7\q\1\o\a\s\w\1\y\e\x\q\x\j\f\a\h\b\x\g\c\v\f\i\d\4\9\a\z\h\8\w\j\2\3\x\f\u\0\l\q\t\8\6\4\s\7\e\9\f\t\6\c\x\t\b\s\s\f\k\w\3\u\o\c\q\2\x\w\m\n\5\e\r\t\1\w\j\b\6\o\0\l\w\x\d\x\o\w\p\1\9\h\3\q\n\j\m\2\k\d\2\e\a\g\a\7\5\j\x\f\d\1\c\z\e\b\o\x\v\q\v\a\9\s\m\m\t\x\k\v\l\8\4\l\h\y\4\i\m\q\k\w\7\v\g\z\9\s\o\x\d\2\x\i\e\h\z\k\z\1\e\c\f\2\2\p\n\3\j\x\w\c\t\4\w\t\j\a\k\j\w\8\u\d\0\u\t\k\y\u\a\1\3\c\t\i\z\r\d\n\1\h\6\7\6\r\f\4\a\q\y\a\5\y\g\j\v\1\s\a\9\2\m\k\2\k\j\m\f\9\b\a\s\0\q\i\1\k\s\n\h\l\y\a\e\6\o\1\5\f\i\4\t\6\0\s\b\3\z\y\4\s\j\d\x\7\0\u\u\w\i\g\j\c\u\3\z\t\x\a\3\5\6\7\z\0\z\o\q\j\3\0\0\t\2\3\m\u\f\c\m\w\0\r\l\7\6\b\u\z\4\w\f\m\w\o\1\g\e\m\c\7\q\k\c\7\u\w\8\u\k\1\i\n\6\u\x\h\5\e\u\f\1\z\e\v\3\l\b\c\4\u\c\g\w\7\j\w\c\w\b\6\p\j\1\l\3\7\m\v\2\r\v\r\p\7\y\g\q\0\9\y\s\q\r\x\o\j\l\9\1\m\n\8\b\l\h\5\4\o\l\q\l\l\c\p\c ]] 00:06:54.938 18:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:54.938 18:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:54.938 18:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:54.938 18:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:54.938 18:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:54.938 18:28:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:55.196 [2024-05-16 18:28:08.474772] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
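# The dd_flags_misc_forced_aio runs above and below walk a small flag matrix: every input
# flag in flags_ro is paired with every output flag in flags_rw. A rough outline of that
# loop, using the arrays traced above; DD, src and dst are placeholders for the spdk_dd
# binary and the dd.dump0/dd.dump1 scratch files.
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
  # a fresh 512-byte random payload is generated into $src before each pass (gen_bytes 512)
  for flag_rw in "${flags_rw[@]}"; do
    "$DD" --aio --if="$src" --iflag="$flag_ro" --of="$dst" --oflag="$flag_rw"
    [[ $(< "$src") == "$(< "$dst")" ]]   # the long escaped comparisons in the trace
  done
done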
00:06:55.196 [2024-05-16 18:28:08.474911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63449 ] 00:06:55.196 [2024-05-16 18:28:08.614271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.454 [2024-05-16 18:28:08.762370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.454 [2024-05-16 18:28:08.835146] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.022  Copying: 512/512 [B] (average 500 kBps) 00:06:56.022 00:06:56.022 18:28:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wtqdllgat3mtiojzhxrhek7jh1vdqjoq3rtsmiim5en0asarbyxtsborutx9cicvanyst7oznw2b7jeakx16snb53gqds952tozaie9m4nks8091fvsduozx4tst4it552frp2rinu589x2mvm7v5bpred2e17o6ab3zkgnbrp1mcudvzj8wa8y24ccxlq9weyyzp22sezjk43arbosfskh3bqi38tv5gbemyxiy2ihhxjx3bmywuubgzhm1qwzm4lmrj78lvqhsf3xmwvzgooumzvu2mwtxnenj1yaxw2mhwqf67l16phyfzgz318gylz9p5c5e9jxlsleug541bui8nwsfxynixwlyhf43niialyajmuqagyfgoe5z60874piqgbi6v7fy9e033vvp7l81huoxbd93vfv14w1nkx6b7z4o2nxkigo5rekg65saomhs4vla4cn2dkxckeiilexvnxaq1atnbetvv43835ppaok6cmfmdivx5h1fhz1l == \w\t\q\d\l\l\g\a\t\3\m\t\i\o\j\z\h\x\r\h\e\k\7\j\h\1\v\d\q\j\o\q\3\r\t\s\m\i\i\m\5\e\n\0\a\s\a\r\b\y\x\t\s\b\o\r\u\t\x\9\c\i\c\v\a\n\y\s\t\7\o\z\n\w\2\b\7\j\e\a\k\x\1\6\s\n\b\5\3\g\q\d\s\9\5\2\t\o\z\a\i\e\9\m\4\n\k\s\8\0\9\1\f\v\s\d\u\o\z\x\4\t\s\t\4\i\t\5\5\2\f\r\p\2\r\i\n\u\5\8\9\x\2\m\v\m\7\v\5\b\p\r\e\d\2\e\1\7\o\6\a\b\3\z\k\g\n\b\r\p\1\m\c\u\d\v\z\j\8\w\a\8\y\2\4\c\c\x\l\q\9\w\e\y\y\z\p\2\2\s\e\z\j\k\4\3\a\r\b\o\s\f\s\k\h\3\b\q\i\3\8\t\v\5\g\b\e\m\y\x\i\y\2\i\h\h\x\j\x\3\b\m\y\w\u\u\b\g\z\h\m\1\q\w\z\m\4\l\m\r\j\7\8\l\v\q\h\s\f\3\x\m\w\v\z\g\o\o\u\m\z\v\u\2\m\w\t\x\n\e\n\j\1\y\a\x\w\2\m\h\w\q\f\6\7\l\1\6\p\h\y\f\z\g\z\3\1\8\g\y\l\z\9\p\5\c\5\e\9\j\x\l\s\l\e\u\g\5\4\1\b\u\i\8\n\w\s\f\x\y\n\i\x\w\l\y\h\f\4\3\n\i\i\a\l\y\a\j\m\u\q\a\g\y\f\g\o\e\5\z\6\0\8\7\4\p\i\q\g\b\i\6\v\7\f\y\9\e\0\3\3\v\v\p\7\l\8\1\h\u\o\x\b\d\9\3\v\f\v\1\4\w\1\n\k\x\6\b\7\z\4\o\2\n\x\k\i\g\o\5\r\e\k\g\6\5\s\a\o\m\h\s\4\v\l\a\4\c\n\2\d\k\x\c\k\e\i\i\l\e\x\v\n\x\a\q\1\a\t\n\b\e\t\v\v\4\3\8\3\5\p\p\a\o\k\6\c\m\f\m\d\i\v\x\5\h\1\f\h\z\1\l ]] 00:06:56.022 18:28:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.022 18:28:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:56.022 [2024-05-16 18:28:09.286964] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:56.022 [2024-05-16 18:28:09.287104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63451 ] 00:06:56.022 [2024-05-16 18:28:09.428902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.280 [2024-05-16 18:28:09.576993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.280 [2024-05-16 18:28:09.649706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:56.539  Copying: 512/512 [B] (average 500 kBps) 00:06:56.539 00:06:56.539 18:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wtqdllgat3mtiojzhxrhek7jh1vdqjoq3rtsmiim5en0asarbyxtsborutx9cicvanyst7oznw2b7jeakx16snb53gqds952tozaie9m4nks8091fvsduozx4tst4it552frp2rinu589x2mvm7v5bpred2e17o6ab3zkgnbrp1mcudvzj8wa8y24ccxlq9weyyzp22sezjk43arbosfskh3bqi38tv5gbemyxiy2ihhxjx3bmywuubgzhm1qwzm4lmrj78lvqhsf3xmwvzgooumzvu2mwtxnenj1yaxw2mhwqf67l16phyfzgz318gylz9p5c5e9jxlsleug541bui8nwsfxynixwlyhf43niialyajmuqagyfgoe5z60874piqgbi6v7fy9e033vvp7l81huoxbd93vfv14w1nkx6b7z4o2nxkigo5rekg65saomhs4vla4cn2dkxckeiilexvnxaq1atnbetvv43835ppaok6cmfmdivx5h1fhz1l == \w\t\q\d\l\l\g\a\t\3\m\t\i\o\j\z\h\x\r\h\e\k\7\j\h\1\v\d\q\j\o\q\3\r\t\s\m\i\i\m\5\e\n\0\a\s\a\r\b\y\x\t\s\b\o\r\u\t\x\9\c\i\c\v\a\n\y\s\t\7\o\z\n\w\2\b\7\j\e\a\k\x\1\6\s\n\b\5\3\g\q\d\s\9\5\2\t\o\z\a\i\e\9\m\4\n\k\s\8\0\9\1\f\v\s\d\u\o\z\x\4\t\s\t\4\i\t\5\5\2\f\r\p\2\r\i\n\u\5\8\9\x\2\m\v\m\7\v\5\b\p\r\e\d\2\e\1\7\o\6\a\b\3\z\k\g\n\b\r\p\1\m\c\u\d\v\z\j\8\w\a\8\y\2\4\c\c\x\l\q\9\w\e\y\y\z\p\2\2\s\e\z\j\k\4\3\a\r\b\o\s\f\s\k\h\3\b\q\i\3\8\t\v\5\g\b\e\m\y\x\i\y\2\i\h\h\x\j\x\3\b\m\y\w\u\u\b\g\z\h\m\1\q\w\z\m\4\l\m\r\j\7\8\l\v\q\h\s\f\3\x\m\w\v\z\g\o\o\u\m\z\v\u\2\m\w\t\x\n\e\n\j\1\y\a\x\w\2\m\h\w\q\f\6\7\l\1\6\p\h\y\f\z\g\z\3\1\8\g\y\l\z\9\p\5\c\5\e\9\j\x\l\s\l\e\u\g\5\4\1\b\u\i\8\n\w\s\f\x\y\n\i\x\w\l\y\h\f\4\3\n\i\i\a\l\y\a\j\m\u\q\a\g\y\f\g\o\e\5\z\6\0\8\7\4\p\i\q\g\b\i\6\v\7\f\y\9\e\0\3\3\v\v\p\7\l\8\1\h\u\o\x\b\d\9\3\v\f\v\1\4\w\1\n\k\x\6\b\7\z\4\o\2\n\x\k\i\g\o\5\r\e\k\g\6\5\s\a\o\m\h\s\4\v\l\a\4\c\n\2\d\k\x\c\k\e\i\i\l\e\x\v\n\x\a\q\1\a\t\n\b\e\t\v\v\4\3\8\3\5\p\p\a\o\k\6\c\m\f\m\d\i\v\x\5\h\1\f\h\z\1\l ]] 00:06:56.539 18:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:56.539 18:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:56.797 [2024-05-16 18:28:10.074623] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:56.797 [2024-05-16 18:28:10.074728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:06:56.797 [2024-05-16 18:28:10.210281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.056 [2024-05-16 18:28:10.363852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.056 [2024-05-16 18:28:10.438818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.316  Copying: 512/512 [B] (average 250 kBps) 00:06:57.316 00:06:57.316 18:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wtqdllgat3mtiojzhxrhek7jh1vdqjoq3rtsmiim5en0asarbyxtsborutx9cicvanyst7oznw2b7jeakx16snb53gqds952tozaie9m4nks8091fvsduozx4tst4it552frp2rinu589x2mvm7v5bpred2e17o6ab3zkgnbrp1mcudvzj8wa8y24ccxlq9weyyzp22sezjk43arbosfskh3bqi38tv5gbemyxiy2ihhxjx3bmywuubgzhm1qwzm4lmrj78lvqhsf3xmwvzgooumzvu2mwtxnenj1yaxw2mhwqf67l16phyfzgz318gylz9p5c5e9jxlsleug541bui8nwsfxynixwlyhf43niialyajmuqagyfgoe5z60874piqgbi6v7fy9e033vvp7l81huoxbd93vfv14w1nkx6b7z4o2nxkigo5rekg65saomhs4vla4cn2dkxckeiilexvnxaq1atnbetvv43835ppaok6cmfmdivx5h1fhz1l == \w\t\q\d\l\l\g\a\t\3\m\t\i\o\j\z\h\x\r\h\e\k\7\j\h\1\v\d\q\j\o\q\3\r\t\s\m\i\i\m\5\e\n\0\a\s\a\r\b\y\x\t\s\b\o\r\u\t\x\9\c\i\c\v\a\n\y\s\t\7\o\z\n\w\2\b\7\j\e\a\k\x\1\6\s\n\b\5\3\g\q\d\s\9\5\2\t\o\z\a\i\e\9\m\4\n\k\s\8\0\9\1\f\v\s\d\u\o\z\x\4\t\s\t\4\i\t\5\5\2\f\r\p\2\r\i\n\u\5\8\9\x\2\m\v\m\7\v\5\b\p\r\e\d\2\e\1\7\o\6\a\b\3\z\k\g\n\b\r\p\1\m\c\u\d\v\z\j\8\w\a\8\y\2\4\c\c\x\l\q\9\w\e\y\y\z\p\2\2\s\e\z\j\k\4\3\a\r\b\o\s\f\s\k\h\3\b\q\i\3\8\t\v\5\g\b\e\m\y\x\i\y\2\i\h\h\x\j\x\3\b\m\y\w\u\u\b\g\z\h\m\1\q\w\z\m\4\l\m\r\j\7\8\l\v\q\h\s\f\3\x\m\w\v\z\g\o\o\u\m\z\v\u\2\m\w\t\x\n\e\n\j\1\y\a\x\w\2\m\h\w\q\f\6\7\l\1\6\p\h\y\f\z\g\z\3\1\8\g\y\l\z\9\p\5\c\5\e\9\j\x\l\s\l\e\u\g\5\4\1\b\u\i\8\n\w\s\f\x\y\n\i\x\w\l\y\h\f\4\3\n\i\i\a\l\y\a\j\m\u\q\a\g\y\f\g\o\e\5\z\6\0\8\7\4\p\i\q\g\b\i\6\v\7\f\y\9\e\0\3\3\v\v\p\7\l\8\1\h\u\o\x\b\d\9\3\v\f\v\1\4\w\1\n\k\x\6\b\7\z\4\o\2\n\x\k\i\g\o\5\r\e\k\g\6\5\s\a\o\m\h\s\4\v\l\a\4\c\n\2\d\k\x\c\k\e\i\i\l\e\x\v\n\x\a\q\1\a\t\n\b\e\t\v\v\4\3\8\3\5\p\p\a\o\k\6\c\m\f\m\d\i\v\x\5\h\1\f\h\z\1\l ]] 00:06:57.316 18:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:57.316 18:28:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:57.575 [2024-05-16 18:28:10.857032] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:57.575 [2024-05-16 18:28:10.857153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63477 ] 00:06:57.575 [2024-05-16 18:28:10.994907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.834 [2024-05-16 18:28:11.146606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.834 [2024-05-16 18:28:11.221170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.093  Copying: 512/512 [B] (average 250 kBps) 00:06:58.093 00:06:58.352 18:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ wtqdllgat3mtiojzhxrhek7jh1vdqjoq3rtsmiim5en0asarbyxtsborutx9cicvanyst7oznw2b7jeakx16snb53gqds952tozaie9m4nks8091fvsduozx4tst4it552frp2rinu589x2mvm7v5bpred2e17o6ab3zkgnbrp1mcudvzj8wa8y24ccxlq9weyyzp22sezjk43arbosfskh3bqi38tv5gbemyxiy2ihhxjx3bmywuubgzhm1qwzm4lmrj78lvqhsf3xmwvzgooumzvu2mwtxnenj1yaxw2mhwqf67l16phyfzgz318gylz9p5c5e9jxlsleug541bui8nwsfxynixwlyhf43niialyajmuqagyfgoe5z60874piqgbi6v7fy9e033vvp7l81huoxbd93vfv14w1nkx6b7z4o2nxkigo5rekg65saomhs4vla4cn2dkxckeiilexvnxaq1atnbetvv43835ppaok6cmfmdivx5h1fhz1l == \w\t\q\d\l\l\g\a\t\3\m\t\i\o\j\z\h\x\r\h\e\k\7\j\h\1\v\d\q\j\o\q\3\r\t\s\m\i\i\m\5\e\n\0\a\s\a\r\b\y\x\t\s\b\o\r\u\t\x\9\c\i\c\v\a\n\y\s\t\7\o\z\n\w\2\b\7\j\e\a\k\x\1\6\s\n\b\5\3\g\q\d\s\9\5\2\t\o\z\a\i\e\9\m\4\n\k\s\8\0\9\1\f\v\s\d\u\o\z\x\4\t\s\t\4\i\t\5\5\2\f\r\p\2\r\i\n\u\5\8\9\x\2\m\v\m\7\v\5\b\p\r\e\d\2\e\1\7\o\6\a\b\3\z\k\g\n\b\r\p\1\m\c\u\d\v\z\j\8\w\a\8\y\2\4\c\c\x\l\q\9\w\e\y\y\z\p\2\2\s\e\z\j\k\4\3\a\r\b\o\s\f\s\k\h\3\b\q\i\3\8\t\v\5\g\b\e\m\y\x\i\y\2\i\h\h\x\j\x\3\b\m\y\w\u\u\b\g\z\h\m\1\q\w\z\m\4\l\m\r\j\7\8\l\v\q\h\s\f\3\x\m\w\v\z\g\o\o\u\m\z\v\u\2\m\w\t\x\n\e\n\j\1\y\a\x\w\2\m\h\w\q\f\6\7\l\1\6\p\h\y\f\z\g\z\3\1\8\g\y\l\z\9\p\5\c\5\e\9\j\x\l\s\l\e\u\g\5\4\1\b\u\i\8\n\w\s\f\x\y\n\i\x\w\l\y\h\f\4\3\n\i\i\a\l\y\a\j\m\u\q\a\g\y\f\g\o\e\5\z\6\0\8\7\4\p\i\q\g\b\i\6\v\7\f\y\9\e\0\3\3\v\v\p\7\l\8\1\h\u\o\x\b\d\9\3\v\f\v\1\4\w\1\n\k\x\6\b\7\z\4\o\2\n\x\k\i\g\o\5\r\e\k\g\6\5\s\a\o\m\h\s\4\v\l\a\4\c\n\2\d\k\x\c\k\e\i\i\l\e\x\v\n\x\a\q\1\a\t\n\b\e\t\v\v\4\3\8\3\5\p\p\a\o\k\6\c\m\f\m\d\i\v\x\5\h\1\f\h\z\1\l ]] 00:06:58.352 00:06:58.352 real 0m6.401s 00:06:58.352 user 0m3.908s 00:06:58.352 sys 0m1.495s 00:06:58.353 18:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.353 18:28:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:58.353 ************************************ 00:06:58.353 END TEST dd_flags_misc_forced_aio 00:06:58.353 ************************************ 00:06:58.353 18:28:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:58.353 18:28:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:58.353 18:28:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:58.353 00:06:58.353 real 0m26.332s 00:06:58.353 user 0m14.626s 00:06:58.353 sys 0m7.865s 00:06:58.353 18:28:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:58.353 ************************************ 00:06:58.353 END TEST spdk_dd_posix 00:06:58.353 18:28:11 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:58.353 ************************************ 00:06:58.353 18:28:11 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:58.353 18:28:11 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:58.353 18:28:11 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.353 18:28:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:58.353 ************************************ 00:06:58.353 START TEST spdk_dd_malloc 00:06:58.353 ************************************ 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:58.353 * Looking for test storage... 00:06:58.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:58.353 ************************************ 00:06:58.353 START TEST dd_malloc_copy 00:06:58.353 ************************************ 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1121 -- # malloc_copy 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:58.353 18:28:11 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.353 [2024-05-16 18:28:11.839147] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:06:58.353 [2024-05-16 18:28:11.839257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63551 ] 00:06:58.353 { 00:06:58.353 "subsystems": [ 00:06:58.353 { 00:06:58.353 "subsystem": "bdev", 00:06:58.353 "config": [ 00:06:58.353 { 00:06:58.353 "params": { 00:06:58.353 "block_size": 512, 00:06:58.353 "num_blocks": 1048576, 00:06:58.353 "name": "malloc0" 00:06:58.353 }, 00:06:58.353 "method": "bdev_malloc_create" 00:06:58.353 }, 00:06:58.353 { 00:06:58.353 "params": { 00:06:58.353 "block_size": 512, 00:06:58.353 "num_blocks": 1048576, 00:06:58.353 "name": "malloc1" 00:06:58.353 }, 00:06:58.353 "method": "bdev_malloc_create" 00:06:58.353 }, 00:06:58.353 { 00:06:58.353 "method": "bdev_wait_for_examine" 00:06:58.353 } 00:06:58.353 ] 00:06:58.353 } 00:06:58.353 ] 00:06:58.353 } 00:06:58.612 [2024-05-16 18:28:11.972599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.870 [2024-05-16 18:28:12.121114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.870 [2024-05-16 18:28:12.196456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.710  Copying: 200/512 [MB] (200 MBps) Copying: 401/512 [MB] (200 MBps) Copying: 512/512 [MB] (average 200 MBps) 00:07:02.710 00:07:02.710 18:28:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:02.710 18:28:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:02.710 18:28:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:02.710 18:28:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:02.710 [2024-05-16 18:28:15.953056] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
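# Both malloc copies here are driven entirely by the JSON config echoed in the trace,
# handed to spdk_dd on file descriptor 62. One hand-rolled way to reproduce that wiring
# (the heredoc-on-fd-62 redirection is only an illustration; the test itself builds the
# config with gen_conf), using the bdev names and sizes shown above:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 62<<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
# 1048576 blocks of 512 bytes gives two 512 MiB malloc bdevs, which matches the
# "Copying: 512/512 [MB]" progress lines around this point in the log.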
00:07:02.710 [2024-05-16 18:28:15.953161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63605 ] 00:07:02.710 { 00:07:02.710 "subsystems": [ 00:07:02.710 { 00:07:02.710 "subsystem": "bdev", 00:07:02.710 "config": [ 00:07:02.710 { 00:07:02.710 "params": { 00:07:02.710 "block_size": 512, 00:07:02.710 "num_blocks": 1048576, 00:07:02.710 "name": "malloc0" 00:07:02.710 }, 00:07:02.710 "method": "bdev_malloc_create" 00:07:02.710 }, 00:07:02.710 { 00:07:02.710 "params": { 00:07:02.710 "block_size": 512, 00:07:02.710 "num_blocks": 1048576, 00:07:02.710 "name": "malloc1" 00:07:02.710 }, 00:07:02.710 "method": "bdev_malloc_create" 00:07:02.710 }, 00:07:02.710 { 00:07:02.710 "method": "bdev_wait_for_examine" 00:07:02.710 } 00:07:02.710 ] 00:07:02.710 } 00:07:02.710 ] 00:07:02.710 } 00:07:02.710 [2024-05-16 18:28:16.093420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.968 [2024-05-16 18:28:16.213274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.968 [2024-05-16 18:28:16.267037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.451  Copying: 196/512 [MB] (196 MBps) Copying: 394/512 [MB] (197 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:07:06.451 00:07:06.451 00:07:06.451 real 0m7.997s 00:07:06.451 user 0m6.882s 00:07:06.451 sys 0m0.953s 00:07:06.451 18:28:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.451 ************************************ 00:07:06.451 END TEST dd_malloc_copy 00:07:06.451 ************************************ 00:07:06.451 18:28:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:06.451 00:07:06.451 real 0m8.139s 00:07:06.451 user 0m6.929s 00:07:06.451 sys 0m1.048s 00:07:06.451 18:28:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.451 ************************************ 00:07:06.451 END TEST spdk_dd_malloc 00:07:06.451 ************************************ 00:07:06.451 18:28:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:06.451 18:28:19 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:06.451 18:28:19 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:06.451 18:28:19 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.451 18:28:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:06.451 ************************************ 00:07:06.451 START TEST spdk_dd_bdev_to_bdev 00:07:06.451 ************************************ 00:07:06.451 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:06.709 * Looking for test storage... 
00:07:06.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:06.709 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:06.709 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.709 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.709 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.709 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.709 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.709 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.709 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:06.710 
18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:06.710 ************************************ 00:07:06.710 START TEST dd_inflate_file 00:07:06.710 ************************************ 00:07:06.710 18:28:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:06.710 [2024-05-16 18:28:20.047133] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:06.710 [2024-05-16 18:28:20.047236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63710 ] 00:07:06.710 [2024-05-16 18:28:20.186730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.967 [2024-05-16 18:28:20.320940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.967 [2024-05-16 18:28:20.380606] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.225  Copying: 64/64 [MB] (average 1185 MBps) 00:07:07.225 00:07:07.225 00:07:07.225 real 0m0.704s 00:07:07.225 user 0m0.443s 00:07:07.225 sys 0m0.342s 00:07:07.225 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:07.225 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:07.225 ************************************ 00:07:07.225 END TEST dd_inflate_file 00:07:07.225 ************************************ 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:07.483 ************************************ 00:07:07.483 START TEST dd_copy_to_out_bdev 00:07:07.483 ************************************ 00:07:07.483 18:28:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:07.483 { 00:07:07.483 "subsystems": [ 00:07:07.483 { 00:07:07.483 "subsystem": "bdev", 00:07:07.483 "config": [ 00:07:07.483 { 00:07:07.483 "params": { 00:07:07.483 "trtype": "pcie", 00:07:07.483 "traddr": "0000:00:10.0", 00:07:07.483 "name": "Nvme0" 00:07:07.483 }, 00:07:07.483 "method": "bdev_nvme_attach_controller" 00:07:07.483 }, 00:07:07.483 { 00:07:07.483 "params": { 00:07:07.483 "trtype": "pcie", 00:07:07.483 "traddr": "0000:00:11.0", 00:07:07.483 "name": "Nvme1" 00:07:07.483 }, 00:07:07.483 "method": "bdev_nvme_attach_controller" 00:07:07.483 }, 00:07:07.483 { 00:07:07.483 "method": "bdev_wait_for_examine" 00:07:07.483 } 00:07:07.483 ] 00:07:07.483 } 00:07:07.483 ] 00:07:07.483 } 00:07:07.483 [2024-05-16 18:28:20.820281] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
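# The 67108891 bytes reported by wc -c above are the 27-byte magic line written earlier
# plus the 64 MiB of zeroes appended by dd_inflate_file (64 * 1048576 + 27 = 67108891).
# Rounding that up to whole 1 MiB blocks is presumably how the count of 65 used by the
# bdev copies below is derived, e.g.:
test_file0_size=67108891
count=$(( (test_file0_size / 1048576) + 1 ))   # 64 full blocks plus 1 for the 27-byte tail, i.e. 65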
00:07:07.483 [2024-05-16 18:28:20.820432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63749 ] 00:07:07.483 [2024-05-16 18:28:20.969601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.741 [2024-05-16 18:28:21.104318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.741 [2024-05-16 18:28:21.161637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.375  Copying: 61/64 [MB] (61 MBps) Copying: 64/64 [MB] (average 61 MBps) 00:07:09.375 00:07:09.375 00:07:09.375 real 0m1.888s 00:07:09.375 user 0m1.629s 00:07:09.375 sys 0m1.426s 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:09.375 ************************************ 00:07:09.375 END TEST dd_copy_to_out_bdev 00:07:09.375 ************************************ 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:09.375 ************************************ 00:07:09.375 START TEST dd_offset_magic 00:07:09.375 ************************************ 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1121 -- # offset_magic 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:09.375 18:28:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:09.375 [2024-05-16 18:28:22.755923] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:09.375 [2024-05-16 18:28:22.756041] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63794 ] 00:07:09.375 { 00:07:09.375 "subsystems": [ 00:07:09.375 { 00:07:09.375 "subsystem": "bdev", 00:07:09.375 "config": [ 00:07:09.375 { 00:07:09.375 "params": { 00:07:09.375 "trtype": "pcie", 00:07:09.375 "traddr": "0000:00:10.0", 00:07:09.375 "name": "Nvme0" 00:07:09.375 }, 00:07:09.375 "method": "bdev_nvme_attach_controller" 00:07:09.375 }, 00:07:09.375 { 00:07:09.375 "params": { 00:07:09.375 "trtype": "pcie", 00:07:09.375 "traddr": "0000:00:11.0", 00:07:09.375 "name": "Nvme1" 00:07:09.375 }, 00:07:09.375 "method": "bdev_nvme_attach_controller" 00:07:09.375 }, 00:07:09.375 { 00:07:09.375 "method": "bdev_wait_for_examine" 00:07:09.375 } 00:07:09.375 ] 00:07:09.375 } 00:07:09.375 ] 00:07:09.375 } 00:07:09.634 [2024-05-16 18:28:22.894182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.634 [2024-05-16 18:28:23.013951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.634 [2024-05-16 18:28:23.067717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.151  Copying: 65/65 [MB] (average 955 MBps) 00:07:10.151 00:07:10.151 18:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:10.151 18:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:10.151 18:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:10.151 18:28:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:10.151 [2024-05-16 18:28:23.618309] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:10.151 [2024-05-16 18:28:23.618475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63814 ] 00:07:10.151 { 00:07:10.151 "subsystems": [ 00:07:10.151 { 00:07:10.151 "subsystem": "bdev", 00:07:10.151 "config": [ 00:07:10.151 { 00:07:10.151 "params": { 00:07:10.151 "trtype": "pcie", 00:07:10.151 "traddr": "0000:00:10.0", 00:07:10.151 "name": "Nvme0" 00:07:10.151 }, 00:07:10.151 "method": "bdev_nvme_attach_controller" 00:07:10.151 }, 00:07:10.151 { 00:07:10.151 "params": { 00:07:10.151 "trtype": "pcie", 00:07:10.151 "traddr": "0000:00:11.0", 00:07:10.151 "name": "Nvme1" 00:07:10.151 }, 00:07:10.151 "method": "bdev_nvme_attach_controller" 00:07:10.151 }, 00:07:10.151 { 00:07:10.151 "method": "bdev_wait_for_examine" 00:07:10.151 } 00:07:10.151 ] 00:07:10.151 } 00:07:10.151 ] 00:07:10.151 } 00:07:10.413 [2024-05-16 18:28:23.761846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.413 [2024-05-16 18:28:23.884035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.670 [2024-05-16 18:28:23.942415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.927  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:10.928 00:07:10.928 18:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:10.928 18:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:10.928 18:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:10.928 18:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:10.928 18:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:10.928 18:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:10.928 18:28:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:10.928 [2024-05-16 18:28:24.384528] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
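# The offset_magic pass above (seek=16) and the one below (seek=64) follow the same round
# trip: the magic line sits in the first block of Nvme0n1 (it was echoed into dd.dump0
# before the file was inflated and copied out), so writing 65 blocks at an offset on
# Nvme1n1 and reading one block back from that offset must return it intact. A rough
# sketch; DD, conf and dump1 are placeholders for the spdk_dd binary, a file holding the
# JSON config and the dd.dump1 scratch file.
magic='This Is Our Magic, find it'
for offset in 16 64; do
  "$DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek="$offset" --bs=1048576 --json /dev/fd/62 62< "$conf"
  "$DD" --ib=Nvme1n1 --of="$dump1" --count=1 --skip="$offset" --bs=1048576 --json /dev/fd/62 62< "$conf"
  read -rn26 magic_check < "$dump1"   # 26 is the length of the magic string
  [[ $magic_check == "$magic" ]]
done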
00:07:10.928 [2024-05-16 18:28:24.384616] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63825 ] 00:07:10.928 { 00:07:10.928 "subsystems": [ 00:07:10.928 { 00:07:10.928 "subsystem": "bdev", 00:07:10.928 "config": [ 00:07:10.928 { 00:07:10.928 "params": { 00:07:10.928 "trtype": "pcie", 00:07:10.928 "traddr": "0000:00:10.0", 00:07:10.928 "name": "Nvme0" 00:07:10.928 }, 00:07:10.928 "method": "bdev_nvme_attach_controller" 00:07:10.928 }, 00:07:10.928 { 00:07:10.928 "params": { 00:07:10.928 "trtype": "pcie", 00:07:10.928 "traddr": "0000:00:11.0", 00:07:10.928 "name": "Nvme1" 00:07:10.928 }, 00:07:10.928 "method": "bdev_nvme_attach_controller" 00:07:10.928 }, 00:07:10.928 { 00:07:10.928 "method": "bdev_wait_for_examine" 00:07:10.928 } 00:07:10.928 ] 00:07:10.928 } 00:07:10.928 ] 00:07:10.928 } 00:07:11.185 [2024-05-16 18:28:24.528368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.185 [2024-05-16 18:28:24.643588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.443 [2024-05-16 18:28:24.698404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.701  Copying: 65/65 [MB] (average 1015 MBps) 00:07:11.701 00:07:11.701 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:11.701 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:11.701 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:11.701 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:11.960 [2024-05-16 18:28:25.243312] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:11.960 [2024-05-16 18:28:25.243438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63845 ] 00:07:11.960 { 00:07:11.960 "subsystems": [ 00:07:11.960 { 00:07:11.960 "subsystem": "bdev", 00:07:11.960 "config": [ 00:07:11.960 { 00:07:11.960 "params": { 00:07:11.960 "trtype": "pcie", 00:07:11.960 "traddr": "0000:00:10.0", 00:07:11.960 "name": "Nvme0" 00:07:11.960 }, 00:07:11.960 "method": "bdev_nvme_attach_controller" 00:07:11.960 }, 00:07:11.960 { 00:07:11.960 "params": { 00:07:11.960 "trtype": "pcie", 00:07:11.960 "traddr": "0000:00:11.0", 00:07:11.960 "name": "Nvme1" 00:07:11.960 }, 00:07:11.960 "method": "bdev_nvme_attach_controller" 00:07:11.960 }, 00:07:11.960 { 00:07:11.960 "method": "bdev_wait_for_examine" 00:07:11.960 } 00:07:11.960 ] 00:07:11.960 } 00:07:11.960 ] 00:07:11.960 } 00:07:11.960 [2024-05-16 18:28:25.383423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.219 [2024-05-16 18:28:25.492384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.219 [2024-05-16 18:28:25.546087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.477  Copying: 1024/1024 [kB] (average 333 MBps) 00:07:12.477 00:07:12.477 ************************************ 00:07:12.477 END TEST dd_offset_magic 00:07:12.477 ************************************ 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:12.478 00:07:12.478 real 0m3.227s 00:07:12.478 user 0m2.367s 00:07:12.478 sys 0m0.930s 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:12.478 18:28:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:12.736 { 00:07:12.736 "subsystems": [ 00:07:12.736 { 00:07:12.736 "subsystem": "bdev", 00:07:12.736 "config": [ 00:07:12.736 { 00:07:12.736 "params": { 00:07:12.736 "trtype": "pcie", 00:07:12.736 "traddr": "0000:00:10.0", 00:07:12.736 "name": "Nvme0" 
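The JSON blocks repeated above are the bdev configuration that gen_conf emits for each spdk_dd invocation and that spdk_dd reads back through --json, pointed at /dev/fd/62 by the harness. A minimal stand-alone sketch of the same pattern follows; the test attaches two PCIe controllers, but one is enough to show the shape, the /tmp paths are illustrative only, and the relative binary path assumes you are sitting in an SPDK build tree.

# Hypothetical reproduction outside the harness: attach 0000:00:10.0 as controller "Nvme0"
# (namespace bdev "Nvme0n1") and dump 65 MiB of it to a file, mirroring the flags seen above.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
./build/bin/spdk_dd --ib=Nvme0n1 --of=/tmp/dd.dump --count=65 --bs=1048576 --json /tmp/bdev.json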
00:07:12.736 }, 00:07:12.736 "method": "bdev_nvme_attach_controller" 00:07:12.736 }, 00:07:12.736 { 00:07:12.736 "params": { 00:07:12.736 "trtype": "pcie", 00:07:12.736 "traddr": "0000:00:11.0", 00:07:12.736 "name": "Nvme1" 00:07:12.736 }, 00:07:12.736 "method": "bdev_nvme_attach_controller" 00:07:12.736 }, 00:07:12.736 { 00:07:12.736 "method": "bdev_wait_for_examine" 00:07:12.736 } 00:07:12.736 ] 00:07:12.736 } 00:07:12.736 ] 00:07:12.736 } 00:07:12.736 [2024-05-16 18:28:26.038708] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:12.736 [2024-05-16 18:28:26.038949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63882 ] 00:07:12.736 [2024-05-16 18:28:26.181252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.995 [2024-05-16 18:28:26.328163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.995 [2024-05-16 18:28:26.380863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.512  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:13.512 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:13.512 18:28:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:13.512 [2024-05-16 18:28:26.829162] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:13.512 [2024-05-16 18:28:26.829279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63903 ] 00:07:13.512 { 00:07:13.512 "subsystems": [ 00:07:13.512 { 00:07:13.512 "subsystem": "bdev", 00:07:13.512 "config": [ 00:07:13.512 { 00:07:13.512 "params": { 00:07:13.512 "trtype": "pcie", 00:07:13.512 "traddr": "0000:00:10.0", 00:07:13.512 "name": "Nvme0" 00:07:13.512 }, 00:07:13.512 "method": "bdev_nvme_attach_controller" 00:07:13.512 }, 00:07:13.512 { 00:07:13.512 "params": { 00:07:13.512 "trtype": "pcie", 00:07:13.512 "traddr": "0000:00:11.0", 00:07:13.512 "name": "Nvme1" 00:07:13.512 }, 00:07:13.512 "method": "bdev_nvme_attach_controller" 00:07:13.512 }, 00:07:13.512 { 00:07:13.512 "method": "bdev_wait_for_examine" 00:07:13.512 } 00:07:13.512 ] 00:07:13.512 } 00:07:13.512 ] 00:07:13.512 } 00:07:13.512 [2024-05-16 18:28:26.968442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.771 [2024-05-16 18:28:27.085980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.771 [2024-05-16 18:28:27.139731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.031  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:14.031 00:07:14.289 18:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:14.289 00:07:14.289 real 0m7.668s 00:07:14.289 user 0m5.668s 00:07:14.289 sys 0m3.412s 00:07:14.289 18:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.289 ************************************ 00:07:14.289 END TEST spdk_dd_bdev_to_bdev 00:07:14.289 ************************************ 00:07:14.289 18:28:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:14.289 18:28:27 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:14.289 18:28:27 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:14.289 18:28:27 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.289 18:28:27 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.289 18:28:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:14.290 ************************************ 00:07:14.290 START TEST spdk_dd_uring 00:07:14.290 ************************************ 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:14.290 * Looking for test storage... 
00:07:14.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:14.290 ************************************ 00:07:14.290 START TEST dd_uring_copy 00:07:14.290 ************************************ 00:07:14.290 
18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1121 -- # uring_zram_copy 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=wmj4kadg4t85pxrzyajfjfix7w11nbburcfxhydvb6zd6pzlowev8g83t7rca8c0cqep5vjj4itjg7hngvtyib83gxjxnrvshgbscbfnnb9wrmcltm453anprepl3cj4vcuov8ld7bpbxmq2y9ux9s609ec3p5aurz4jempozmdiebstxtimgp4e9x4jrp476cjvzq9d5mkdhnek7ozqql5fs8g8zgb16sxljquevmwmzyj1i2md814jqmmyi8a0p6eklx6kot5t1gpnogshdf2ipa2p5xz4tv1z46ckknpk8pbetmulixgave4hykha094glg3q479z5kksg6lej0q8qzw1xuyjy7qgitxppnzd2w6hfuwur6wkf97bt76pdq4sz8418zg5hu1r572fnoasdmwu8mm86q4k50k60o2vy5km3w5sy5crg6quuomzqk5czrzmdo2i84s3w1jzeuvmotufnvxnuez6ejedjq1x2j4f2h0dbd3ho72sdeq5e82nkgz9r19g0qnx37uzecr35s2hotwf6oo6ypnpag8ugver0nock1o9fw7lkd06zc34doxw75sghrjfwv2ci1apggl4o3a5jx4df5bwk9nfx5nnamfi4ehg08akj282ibjewyzqtj7tl802j7yxiyy4o45zctrd46yoswu8qg0x8fk5f73r6urp6l1mwqcamith2a37cr5pzp2zufjvhz9aecwd2b8i9jt4qx8b5jkzm99fjr5yp1t65firaqoyyl95a8wofv99kdkhmet6mvdv0y8fvzv4hdqf7jilq3mzb6b256ag07n51zskja2jtnwl7du31wuh3mwy281c1rq2o3r3cwyws4inc3int5c2s8e15mv4hr7z1u94rx93hc6l5rxaovd9lsb9o0xiodypphp0jm7bejfm2zqspe8ymmj9ticomib1cnky2bttplms13fddm2hwbo1xfh7u89b01wzjop4qot28fnltamv00ecu09aumbxn15ym1cp 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo wmj4kadg4t85pxrzyajfjfix7w11nbburcfxhydvb6zd6pzlowev8g83t7rca8c0cqep5vjj4itjg7hngvtyib83gxjxnrvshgbscbfnnb9wrmcltm453anprepl3cj4vcuov8ld7bpbxmq2y9ux9s609ec3p5aurz4jempozmdiebstxtimgp4e9x4jrp476cjvzq9d5mkdhnek7ozqql5fs8g8zgb16sxljquevmwmzyj1i2md814jqmmyi8a0p6eklx6kot5t1gpnogshdf2ipa2p5xz4tv1z46ckknpk8pbetmulixgave4hykha094glg3q479z5kksg6lej0q8qzw1xuyjy7qgitxppnzd2w6hfuwur6wkf97bt76pdq4sz8418zg5hu1r572fnoasdmwu8mm86q4k50k60o2vy5km3w5sy5crg6quuomzqk5czrzmdo2i84s3w1jzeuvmotufnvxnuez6ejedjq1x2j4f2h0dbd3ho72sdeq5e82nkgz9r19g0qnx37uzecr35s2hotwf6oo6ypnpag8ugver0nock1o9fw7lkd06zc34doxw75sghrjfwv2ci1apggl4o3a5jx4df5bwk9nfx5nnamfi4ehg08akj282ibjewyzqtj7tl802j7yxiyy4o45zctrd46yoswu8qg0x8fk5f73r6urp6l1mwqcamith2a37cr5pzp2zufjvhz9aecwd2b8i9jt4qx8b5jkzm99fjr5yp1t65firaqoyyl95a8wofv99kdkhmet6mvdv0y8fvzv4hdqf7jilq3mzb6b256ag07n51zskja2jtnwl7du31wuh3mwy281c1rq2o3r3cwyws4inc3int5c2s8e15mv4hr7z1u94rx93hc6l5rxaovd9lsb9o0xiodypphp0jm7bejfm2zqspe8ymmj9ticomib1cnky2bttplms13fddm2hwbo1xfh7u89b01wzjop4qot28fnltamv00ecu09aumbxn15ym1cp 00:07:14.290 18:28:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:14.549 [2024-05-16 18:28:27.804805] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:14.549 [2024-05-16 18:28:27.804983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63973 ] 00:07:14.549 [2024-05-16 18:28:27.943804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.808 [2024-05-16 18:28:28.090780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.808 [2024-05-16 18:28:28.143139] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:15.635  Copying: 511/511 [MB] (average 1347 MBps) 00:07:15.635 00:07:15.894 18:28:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:15.894 18:28:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:15.894 18:28:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:15.894 18:28:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.894 [2024-05-16 18:28:29.191622] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:15.894 [2024-05-16 18:28:29.191728] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63989 ] 00:07:15.894 { 00:07:15.894 "subsystems": [ 00:07:15.894 { 00:07:15.894 "subsystem": "bdev", 00:07:15.894 "config": [ 00:07:15.894 { 00:07:15.894 "params": { 00:07:15.894 "block_size": 512, 00:07:15.894 "num_blocks": 1048576, 00:07:15.894 "name": "malloc0" 00:07:15.894 }, 00:07:15.894 "method": "bdev_malloc_create" 00:07:15.894 }, 00:07:15.894 { 00:07:15.894 "params": { 00:07:15.894 "filename": "/dev/zram1", 00:07:15.894 "name": "uring0" 00:07:15.894 }, 00:07:15.894 "method": "bdev_uring_create" 00:07:15.894 }, 00:07:15.894 { 00:07:15.894 "method": "bdev_wait_for_examine" 00:07:15.894 } 00:07:15.894 ] 00:07:15.894 } 00:07:15.894 ] 00:07:15.894 } 00:07:15.894 [2024-05-16 18:28:29.332641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.153 [2024-05-16 18:28:29.484238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.153 [2024-05-16 18:28:29.541369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.021  Copying: 225/512 [MB] (225 MBps) Copying: 455/512 [MB] (229 MBps) Copying: 512/512 [MB] (average 227 MBps) 00:07:19.021 00:07:19.021 18:28:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:19.021 18:28:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:19.021 18:28:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:19.021 18:28:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:19.021 [2024-05-16 18:28:32.467874] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
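Before the copies above, dd_uring_copy stages its own backing store: it hot-adds a zram device, sizes it to 512M, and hands /dev/zram1 to spdk_dd as a uring bdev ("uring0") alongside a malloc bdev of 1048576 blocks of 512 bytes ("malloc0"). The xtrace does not record redirection targets, so the following is only a sketch of the equivalent manual steps, assuming the zram module is already loaded as it is on this CI image.

id=$(cat /sys/class/zram-control/hot_add)     # allocate the next free zram device; prints its id (1 in this run)
echo 512M > "/sys/block/zram${id}/disksize"   # size it; /dev/zram${id} is then usable as a block device
# /dev/zram${id} is what the bdev_uring_create entry's "filename" points at in the JSON above,
# paired with a bdev_malloc_create of 1048576 blocks x 512 bytes.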
00:07:19.021 [2024-05-16 18:28:32.467983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64033 ] 00:07:19.021 { 00:07:19.021 "subsystems": [ 00:07:19.021 { 00:07:19.021 "subsystem": "bdev", 00:07:19.021 "config": [ 00:07:19.021 { 00:07:19.021 "params": { 00:07:19.021 "block_size": 512, 00:07:19.021 "num_blocks": 1048576, 00:07:19.021 "name": "malloc0" 00:07:19.021 }, 00:07:19.021 "method": "bdev_malloc_create" 00:07:19.021 }, 00:07:19.021 { 00:07:19.021 "params": { 00:07:19.021 "filename": "/dev/zram1", 00:07:19.021 "name": "uring0" 00:07:19.021 }, 00:07:19.021 "method": "bdev_uring_create" 00:07:19.021 }, 00:07:19.021 { 00:07:19.021 "method": "bdev_wait_for_examine" 00:07:19.021 } 00:07:19.021 ] 00:07:19.021 } 00:07:19.021 ] 00:07:19.021 } 00:07:19.301 [2024-05-16 18:28:32.605759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.301 [2024-05-16 18:28:32.722570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.301 [2024-05-16 18:28:32.776331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.159  Copying: 162/512 [MB] (162 MBps) Copying: 334/512 [MB] (172 MBps) Copying: 506/512 [MB] (171 MBps) Copying: 512/512 [MB] (average 168 MBps) 00:07:23.159 00:07:23.159 18:28:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:23.159 18:28:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ wmj4kadg4t85pxrzyajfjfix7w11nbburcfxhydvb6zd6pzlowev8g83t7rca8c0cqep5vjj4itjg7hngvtyib83gxjxnrvshgbscbfnnb9wrmcltm453anprepl3cj4vcuov8ld7bpbxmq2y9ux9s609ec3p5aurz4jempozmdiebstxtimgp4e9x4jrp476cjvzq9d5mkdhnek7ozqql5fs8g8zgb16sxljquevmwmzyj1i2md814jqmmyi8a0p6eklx6kot5t1gpnogshdf2ipa2p5xz4tv1z46ckknpk8pbetmulixgave4hykha094glg3q479z5kksg6lej0q8qzw1xuyjy7qgitxppnzd2w6hfuwur6wkf97bt76pdq4sz8418zg5hu1r572fnoasdmwu8mm86q4k50k60o2vy5km3w5sy5crg6quuomzqk5czrzmdo2i84s3w1jzeuvmotufnvxnuez6ejedjq1x2j4f2h0dbd3ho72sdeq5e82nkgz9r19g0qnx37uzecr35s2hotwf6oo6ypnpag8ugver0nock1o9fw7lkd06zc34doxw75sghrjfwv2ci1apggl4o3a5jx4df5bwk9nfx5nnamfi4ehg08akj282ibjewyzqtj7tl802j7yxiyy4o45zctrd46yoswu8qg0x8fk5f73r6urp6l1mwqcamith2a37cr5pzp2zufjvhz9aecwd2b8i9jt4qx8b5jkzm99fjr5yp1t65firaqoyyl95a8wofv99kdkhmet6mvdv0y8fvzv4hdqf7jilq3mzb6b256ag07n51zskja2jtnwl7du31wuh3mwy281c1rq2o3r3cwyws4inc3int5c2s8e15mv4hr7z1u94rx93hc6l5rxaovd9lsb9o0xiodypphp0jm7bejfm2zqspe8ymmj9ticomib1cnky2bttplms13fddm2hwbo1xfh7u89b01wzjop4qot28fnltamv00ecu09aumbxn15ym1cp == 
\w\m\j\4\k\a\d\g\4\t\8\5\p\x\r\z\y\a\j\f\j\f\i\x\7\w\1\1\n\b\b\u\r\c\f\x\h\y\d\v\b\6\z\d\6\p\z\l\o\w\e\v\8\g\8\3\t\7\r\c\a\8\c\0\c\q\e\p\5\v\j\j\4\i\t\j\g\7\h\n\g\v\t\y\i\b\8\3\g\x\j\x\n\r\v\s\h\g\b\s\c\b\f\n\n\b\9\w\r\m\c\l\t\m\4\5\3\a\n\p\r\e\p\l\3\c\j\4\v\c\u\o\v\8\l\d\7\b\p\b\x\m\q\2\y\9\u\x\9\s\6\0\9\e\c\3\p\5\a\u\r\z\4\j\e\m\p\o\z\m\d\i\e\b\s\t\x\t\i\m\g\p\4\e\9\x\4\j\r\p\4\7\6\c\j\v\z\q\9\d\5\m\k\d\h\n\e\k\7\o\z\q\q\l\5\f\s\8\g\8\z\g\b\1\6\s\x\l\j\q\u\e\v\m\w\m\z\y\j\1\i\2\m\d\8\1\4\j\q\m\m\y\i\8\a\0\p\6\e\k\l\x\6\k\o\t\5\t\1\g\p\n\o\g\s\h\d\f\2\i\p\a\2\p\5\x\z\4\t\v\1\z\4\6\c\k\k\n\p\k\8\p\b\e\t\m\u\l\i\x\g\a\v\e\4\h\y\k\h\a\0\9\4\g\l\g\3\q\4\7\9\z\5\k\k\s\g\6\l\e\j\0\q\8\q\z\w\1\x\u\y\j\y\7\q\g\i\t\x\p\p\n\z\d\2\w\6\h\f\u\w\u\r\6\w\k\f\9\7\b\t\7\6\p\d\q\4\s\z\8\4\1\8\z\g\5\h\u\1\r\5\7\2\f\n\o\a\s\d\m\w\u\8\m\m\8\6\q\4\k\5\0\k\6\0\o\2\v\y\5\k\m\3\w\5\s\y\5\c\r\g\6\q\u\u\o\m\z\q\k\5\c\z\r\z\m\d\o\2\i\8\4\s\3\w\1\j\z\e\u\v\m\o\t\u\f\n\v\x\n\u\e\z\6\e\j\e\d\j\q\1\x\2\j\4\f\2\h\0\d\b\d\3\h\o\7\2\s\d\e\q\5\e\8\2\n\k\g\z\9\r\1\9\g\0\q\n\x\3\7\u\z\e\c\r\3\5\s\2\h\o\t\w\f\6\o\o\6\y\p\n\p\a\g\8\u\g\v\e\r\0\n\o\c\k\1\o\9\f\w\7\l\k\d\0\6\z\c\3\4\d\o\x\w\7\5\s\g\h\r\j\f\w\v\2\c\i\1\a\p\g\g\l\4\o\3\a\5\j\x\4\d\f\5\b\w\k\9\n\f\x\5\n\n\a\m\f\i\4\e\h\g\0\8\a\k\j\2\8\2\i\b\j\e\w\y\z\q\t\j\7\t\l\8\0\2\j\7\y\x\i\y\y\4\o\4\5\z\c\t\r\d\4\6\y\o\s\w\u\8\q\g\0\x\8\f\k\5\f\7\3\r\6\u\r\p\6\l\1\m\w\q\c\a\m\i\t\h\2\a\3\7\c\r\5\p\z\p\2\z\u\f\j\v\h\z\9\a\e\c\w\d\2\b\8\i\9\j\t\4\q\x\8\b\5\j\k\z\m\9\9\f\j\r\5\y\p\1\t\6\5\f\i\r\a\q\o\y\y\l\9\5\a\8\w\o\f\v\9\9\k\d\k\h\m\e\t\6\m\v\d\v\0\y\8\f\v\z\v\4\h\d\q\f\7\j\i\l\q\3\m\z\b\6\b\2\5\6\a\g\0\7\n\5\1\z\s\k\j\a\2\j\t\n\w\l\7\d\u\3\1\w\u\h\3\m\w\y\2\8\1\c\1\r\q\2\o\3\r\3\c\w\y\w\s\4\i\n\c\3\i\n\t\5\c\2\s\8\e\1\5\m\v\4\h\r\7\z\1\u\9\4\r\x\9\3\h\c\6\l\5\r\x\a\o\v\d\9\l\s\b\9\o\0\x\i\o\d\y\p\p\h\p\0\j\m\7\b\e\j\f\m\2\z\q\s\p\e\8\y\m\m\j\9\t\i\c\o\m\i\b\1\c\n\k\y\2\b\t\t\p\l\m\s\1\3\f\d\d\m\2\h\w\b\o\1\x\f\h\7\u\8\9\b\0\1\w\z\j\o\p\4\q\o\t\2\8\f\n\l\t\a\m\v\0\0\e\c\u\0\9\a\u\m\b\x\n\1\5\y\m\1\c\p ]] 00:07:23.159 18:28:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:23.159 18:28:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ wmj4kadg4t85pxrzyajfjfix7w11nbburcfxhydvb6zd6pzlowev8g83t7rca8c0cqep5vjj4itjg7hngvtyib83gxjxnrvshgbscbfnnb9wrmcltm453anprepl3cj4vcuov8ld7bpbxmq2y9ux9s609ec3p5aurz4jempozmdiebstxtimgp4e9x4jrp476cjvzq9d5mkdhnek7ozqql5fs8g8zgb16sxljquevmwmzyj1i2md814jqmmyi8a0p6eklx6kot5t1gpnogshdf2ipa2p5xz4tv1z46ckknpk8pbetmulixgave4hykha094glg3q479z5kksg6lej0q8qzw1xuyjy7qgitxppnzd2w6hfuwur6wkf97bt76pdq4sz8418zg5hu1r572fnoasdmwu8mm86q4k50k60o2vy5km3w5sy5crg6quuomzqk5czrzmdo2i84s3w1jzeuvmotufnvxnuez6ejedjq1x2j4f2h0dbd3ho72sdeq5e82nkgz9r19g0qnx37uzecr35s2hotwf6oo6ypnpag8ugver0nock1o9fw7lkd06zc34doxw75sghrjfwv2ci1apggl4o3a5jx4df5bwk9nfx5nnamfi4ehg08akj282ibjewyzqtj7tl802j7yxiyy4o45zctrd46yoswu8qg0x8fk5f73r6urp6l1mwqcamith2a37cr5pzp2zufjvhz9aecwd2b8i9jt4qx8b5jkzm99fjr5yp1t65firaqoyyl95a8wofv99kdkhmet6mvdv0y8fvzv4hdqf7jilq3mzb6b256ag07n51zskja2jtnwl7du31wuh3mwy281c1rq2o3r3cwyws4inc3int5c2s8e15mv4hr7z1u94rx93hc6l5rxaovd9lsb9o0xiodypphp0jm7bejfm2zqspe8ymmj9ticomib1cnky2bttplms13fddm2hwbo1xfh7u89b01wzjop4qot28fnltamv00ecu09aumbxn15ym1cp == 
\w\m\j\4\k\a\d\g\4\t\8\5\p\x\r\z\y\a\j\f\j\f\i\x\7\w\1\1\n\b\b\u\r\c\f\x\h\y\d\v\b\6\z\d\6\p\z\l\o\w\e\v\8\g\8\3\t\7\r\c\a\8\c\0\c\q\e\p\5\v\j\j\4\i\t\j\g\7\h\n\g\v\t\y\i\b\8\3\g\x\j\x\n\r\v\s\h\g\b\s\c\b\f\n\n\b\9\w\r\m\c\l\t\m\4\5\3\a\n\p\r\e\p\l\3\c\j\4\v\c\u\o\v\8\l\d\7\b\p\b\x\m\q\2\y\9\u\x\9\s\6\0\9\e\c\3\p\5\a\u\r\z\4\j\e\m\p\o\z\m\d\i\e\b\s\t\x\t\i\m\g\p\4\e\9\x\4\j\r\p\4\7\6\c\j\v\z\q\9\d\5\m\k\d\h\n\e\k\7\o\z\q\q\l\5\f\s\8\g\8\z\g\b\1\6\s\x\l\j\q\u\e\v\m\w\m\z\y\j\1\i\2\m\d\8\1\4\j\q\m\m\y\i\8\a\0\p\6\e\k\l\x\6\k\o\t\5\t\1\g\p\n\o\g\s\h\d\f\2\i\p\a\2\p\5\x\z\4\t\v\1\z\4\6\c\k\k\n\p\k\8\p\b\e\t\m\u\l\i\x\g\a\v\e\4\h\y\k\h\a\0\9\4\g\l\g\3\q\4\7\9\z\5\k\k\s\g\6\l\e\j\0\q\8\q\z\w\1\x\u\y\j\y\7\q\g\i\t\x\p\p\n\z\d\2\w\6\h\f\u\w\u\r\6\w\k\f\9\7\b\t\7\6\p\d\q\4\s\z\8\4\1\8\z\g\5\h\u\1\r\5\7\2\f\n\o\a\s\d\m\w\u\8\m\m\8\6\q\4\k\5\0\k\6\0\o\2\v\y\5\k\m\3\w\5\s\y\5\c\r\g\6\q\u\u\o\m\z\q\k\5\c\z\r\z\m\d\o\2\i\8\4\s\3\w\1\j\z\e\u\v\m\o\t\u\f\n\v\x\n\u\e\z\6\e\j\e\d\j\q\1\x\2\j\4\f\2\h\0\d\b\d\3\h\o\7\2\s\d\e\q\5\e\8\2\n\k\g\z\9\r\1\9\g\0\q\n\x\3\7\u\z\e\c\r\3\5\s\2\h\o\t\w\f\6\o\o\6\y\p\n\p\a\g\8\u\g\v\e\r\0\n\o\c\k\1\o\9\f\w\7\l\k\d\0\6\z\c\3\4\d\o\x\w\7\5\s\g\h\r\j\f\w\v\2\c\i\1\a\p\g\g\l\4\o\3\a\5\j\x\4\d\f\5\b\w\k\9\n\f\x\5\n\n\a\m\f\i\4\e\h\g\0\8\a\k\j\2\8\2\i\b\j\e\w\y\z\q\t\j\7\t\l\8\0\2\j\7\y\x\i\y\y\4\o\4\5\z\c\t\r\d\4\6\y\o\s\w\u\8\q\g\0\x\8\f\k\5\f\7\3\r\6\u\r\p\6\l\1\m\w\q\c\a\m\i\t\h\2\a\3\7\c\r\5\p\z\p\2\z\u\f\j\v\h\z\9\a\e\c\w\d\2\b\8\i\9\j\t\4\q\x\8\b\5\j\k\z\m\9\9\f\j\r\5\y\p\1\t\6\5\f\i\r\a\q\o\y\y\l\9\5\a\8\w\o\f\v\9\9\k\d\k\h\m\e\t\6\m\v\d\v\0\y\8\f\v\z\v\4\h\d\q\f\7\j\i\l\q\3\m\z\b\6\b\2\5\6\a\g\0\7\n\5\1\z\s\k\j\a\2\j\t\n\w\l\7\d\u\3\1\w\u\h\3\m\w\y\2\8\1\c\1\r\q\2\o\3\r\3\c\w\y\w\s\4\i\n\c\3\i\n\t\5\c\2\s\8\e\1\5\m\v\4\h\r\7\z\1\u\9\4\r\x\9\3\h\c\6\l\5\r\x\a\o\v\d\9\l\s\b\9\o\0\x\i\o\d\y\p\p\h\p\0\j\m\7\b\e\j\f\m\2\z\q\s\p\e\8\y\m\m\j\9\t\i\c\o\m\i\b\1\c\n\k\y\2\b\t\t\p\l\m\s\1\3\f\d\d\m\2\h\w\b\o\1\x\f\h\7\u\8\9\b\0\1\w\z\j\o\p\4\q\o\t\2\8\f\n\l\t\a\m\v\0\0\e\c\u\0\9\a\u\m\b\x\n\1\5\y\m\1\c\p ]] 00:07:23.159 18:28:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:23.419 18:28:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:23.419 18:28:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:23.419 18:28:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:23.419 18:28:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.419 [2024-05-16 18:28:36.819469] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:23.419 [2024-05-16 18:28:36.819587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64099 ] 00:07:23.419 { 00:07:23.419 "subsystems": [ 00:07:23.419 { 00:07:23.419 "subsystem": "bdev", 00:07:23.419 "config": [ 00:07:23.419 { 00:07:23.419 "params": { 00:07:23.419 "block_size": 512, 00:07:23.419 "num_blocks": 1048576, 00:07:23.419 "name": "malloc0" 00:07:23.419 }, 00:07:23.419 "method": "bdev_malloc_create" 00:07:23.419 }, 00:07:23.419 { 00:07:23.419 "params": { 00:07:23.419 "filename": "/dev/zram1", 00:07:23.419 "name": "uring0" 00:07:23.419 }, 00:07:23.419 "method": "bdev_uring_create" 00:07:23.419 }, 00:07:23.419 { 00:07:23.419 "method": "bdev_wait_for_examine" 00:07:23.419 } 00:07:23.419 ] 00:07:23.419 } 00:07:23.419 ] 00:07:23.419 } 00:07:23.677 [2024-05-16 18:28:36.963483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.677 [2024-05-16 18:28:37.091758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.677 [2024-05-16 18:28:37.150490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.777  Copying: 151/512 [MB] (151 MBps) Copying: 305/512 [MB] (154 MBps) Copying: 456/512 [MB] (150 MBps) Copying: 512/512 [MB] (average 151 MBps) 00:07:27.777 00:07:27.777 18:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:27.777 18:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:27.777 18:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:27.777 18:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:27.777 18:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:27.777 18:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:27.777 18:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:27.777 18:28:41 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:27.777 [2024-05-16 18:28:41.198613] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:27.777 [2024-05-16 18:28:41.198733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64174 ] 00:07:27.777 { 00:07:27.777 "subsystems": [ 00:07:27.777 { 00:07:27.777 "subsystem": "bdev", 00:07:27.777 "config": [ 00:07:27.777 { 00:07:27.777 "params": { 00:07:27.777 "block_size": 512, 00:07:27.777 "num_blocks": 1048576, 00:07:27.777 "name": "malloc0" 00:07:27.777 }, 00:07:27.777 "method": "bdev_malloc_create" 00:07:27.777 }, 00:07:27.777 { 00:07:27.777 "params": { 00:07:27.778 "filename": "/dev/zram1", 00:07:27.778 "name": "uring0" 00:07:27.778 }, 00:07:27.778 "method": "bdev_uring_create" 00:07:27.778 }, 00:07:27.778 { 00:07:27.778 "params": { 00:07:27.778 "name": "uring0" 00:07:27.778 }, 00:07:27.778 "method": "bdev_uring_delete" 00:07:27.778 }, 00:07:27.778 { 00:07:27.778 "method": "bdev_wait_for_examine" 00:07:27.778 } 00:07:27.778 ] 00:07:27.778 } 00:07:27.778 ] 00:07:27.778 } 00:07:28.036 [2024-05-16 18:28:41.334149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.036 [2024-05-16 18:28:41.449027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.036 [2024-05-16 18:28:41.502494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.863  Copying: 0/0 [B] (average 0 Bps) 00:07:28.863 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.863 18:28:42 spdk_dd.spdk_dd_uring.dd_uring_copy -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:28.863 [2024-05-16 18:28:42.167471] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:28.863 [2024-05-16 18:28:42.167628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64203 ] 00:07:28.863 { 00:07:28.863 "subsystems": [ 00:07:28.863 { 00:07:28.863 "subsystem": "bdev", 00:07:28.863 "config": [ 00:07:28.863 { 00:07:28.863 "params": { 00:07:28.863 "block_size": 512, 00:07:28.863 "num_blocks": 1048576, 00:07:28.863 "name": "malloc0" 00:07:28.863 }, 00:07:28.863 "method": "bdev_malloc_create" 00:07:28.863 }, 00:07:28.863 { 00:07:28.863 "params": { 00:07:28.863 "filename": "/dev/zram1", 00:07:28.863 "name": "uring0" 00:07:28.863 }, 00:07:28.863 "method": "bdev_uring_create" 00:07:28.863 }, 00:07:28.863 { 00:07:28.863 "params": { 00:07:28.863 "name": "uring0" 00:07:28.863 }, 00:07:28.863 "method": "bdev_uring_delete" 00:07:28.863 }, 00:07:28.863 { 00:07:28.863 "method": "bdev_wait_for_examine" 00:07:28.863 } 00:07:28.863 ] 00:07:28.863 } 00:07:28.863 ] 00:07:28.863 } 00:07:28.863 [2024-05-16 18:28:42.306043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.122 [2024-05-16 18:28:42.421096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.122 [2024-05-16 18:28:42.478430] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.381 [2024-05-16 18:28:42.683507] bdev.c:8109:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:29.381 [2024-05-16 18:28:42.683561] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:29.381 [2024-05-16 18:28:42.683572] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:29.381 [2024-05-16 18:28:42.683583] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.666 [2024-05-16 18:28:43.004343] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:29.666 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:29.926 
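The pass/fail decision for dd_uring_copy rests on the two verify_magic reads and the final diff above: the 1024-character magic string written at the head of magic.dump0 must come back unchanged after the round trip through uring0, and the two dump files must be byte-identical. Stripped of the harness, and with the input redirections (which xtrace does not record) assumed to be the dump files, the check is roughly:

read -rn1024 verify_magic < magic.dump1        # assumed source; re-read the leading 1024-byte magic string
[[ "$verify_magic" == "$magic" ]] || exit 1    # must equal the generated magic
diff -q magic.dump0 magic.dump1                # whole-file comparison of original and round-tripped data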
00:07:29.926 real 0m15.711s 00:07:29.926 user 0m10.568s 00:07:29.926 sys 0m12.920s 00:07:29.926 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.926 18:28:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:29.926 ************************************ 00:07:29.926 END TEST dd_uring_copy 00:07:29.926 ************************************ 00:07:30.186 00:07:30.186 real 0m15.855s 00:07:30.186 user 0m10.616s 00:07:30.186 sys 0m13.014s 00:07:30.186 18:28:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.186 18:28:43 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:30.186 ************************************ 00:07:30.186 END TEST spdk_dd_uring 00:07:30.186 ************************************ 00:07:30.186 18:28:43 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:30.186 18:28:43 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.186 18:28:43 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.186 18:28:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:30.186 ************************************ 00:07:30.186 START TEST spdk_dd_sparse 00:07:30.186 ************************************ 00:07:30.186 18:28:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:30.186 * Looking for test storage... 00:07:30.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:30.186 18:28:43 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.186 18:28:43 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.186 18:28:43 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.186 18:28:43 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.186 18:28:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.186 18:28:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:30.187 1+0 records in 00:07:30.187 1+0 records out 00:07:30.187 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00595451 s, 704 MB/s 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:30.187 1+0 records in 00:07:30.187 1+0 records out 00:07:30.187 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00676441 s, 620 MB/s 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:30.187 1+0 records in 00:07:30.187 1+0 records out 00:07:30.187 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00627333 s, 669 MB/s 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:30.187 ************************************ 00:07:30.187 START TEST dd_sparse_file_to_file 00:07:30.187 ************************************ 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1121 -- # 
file_to_file 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:30.187 18:28:43 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:30.446 [2024-05-16 18:28:43.701124] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:30.446 [2024-05-16 18:28:43.701232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64289 ] 00:07:30.446 { 00:07:30.446 "subsystems": [ 00:07:30.446 { 00:07:30.446 "subsystem": "bdev", 00:07:30.446 "config": [ 00:07:30.446 { 00:07:30.446 "params": { 00:07:30.446 "block_size": 4096, 00:07:30.446 "filename": "dd_sparse_aio_disk", 00:07:30.446 "name": "dd_aio" 00:07:30.446 }, 00:07:30.446 "method": "bdev_aio_create" 00:07:30.446 }, 00:07:30.446 { 00:07:30.446 "params": { 00:07:30.446 "lvs_name": "dd_lvstore", 00:07:30.446 "bdev_name": "dd_aio" 00:07:30.446 }, 00:07:30.446 "method": "bdev_lvol_create_lvstore" 00:07:30.446 }, 00:07:30.446 { 00:07:30.446 "method": "bdev_wait_for_examine" 00:07:30.446 } 00:07:30.446 ] 00:07:30.446 } 00:07:30.446 ] 00:07:30.446 } 00:07:30.446 [2024-05-16 18:28:43.838967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.705 [2024-05-16 18:28:43.972653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.705 [2024-05-16 18:28:44.035778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.965  Copying: 12/36 [MB] (average 923 MBps) 00:07:30.965 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- 
dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:30.965 00:07:30.965 real 0m0.765s 00:07:30.965 user 0m0.503s 00:07:30.965 sys 0m0.370s 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:30.965 ************************************ 00:07:30.965 END TEST dd_sparse_file_to_file 00:07:30.965 ************************************ 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:30.965 ************************************ 00:07:30.965 START TEST dd_sparse_file_to_bdev 00:07:30.965 ************************************ 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1121 -- # file_to_bdev 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:30.965 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:31.224 18:28:44 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:31.224 [2024-05-16 18:28:44.515183] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
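The 37748736 / 24576 pair asserted above follows directly from how prepare built file_zero1: three 4 MiB extents of zeros at offsets 0, 16 MiB and 32 MiB give an apparent size of 36 MiB (37748736 bytes, stat %s) while only 12 MiB are actually allocated (24576 blocks of 512 bytes, stat %b), and dd_sparse_file_to_file requires both numbers to survive the copy through the dd_aio / dd_lvstore stack. The fixture boils down to the commands traced earlier:

truncate dd_sparse_aio_disk --size 104857600        # 100 MiB backing file for the "dd_aio" AIO bdev
dd if=/dev/zero of=file_zero1 bs=4M count=1         # extent at offset 0
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4  # extent at 16 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8  # extent at 32 MiB; apparent size 36 MiB, allocation 12 MiB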
00:07:31.224 [2024-05-16 18:28:44.515291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64337 ] 00:07:31.224 { 00:07:31.224 "subsystems": [ 00:07:31.224 { 00:07:31.224 "subsystem": "bdev", 00:07:31.224 "config": [ 00:07:31.224 { 00:07:31.224 "params": { 00:07:31.224 "block_size": 4096, 00:07:31.224 "filename": "dd_sparse_aio_disk", 00:07:31.224 "name": "dd_aio" 00:07:31.224 }, 00:07:31.224 "method": "bdev_aio_create" 00:07:31.224 }, 00:07:31.224 { 00:07:31.224 "params": { 00:07:31.224 "lvs_name": "dd_lvstore", 00:07:31.224 "lvol_name": "dd_lvol", 00:07:31.224 "size_in_mib": 36, 00:07:31.224 "thin_provision": true 00:07:31.224 }, 00:07:31.224 "method": "bdev_lvol_create" 00:07:31.224 }, 00:07:31.224 { 00:07:31.224 "method": "bdev_wait_for_examine" 00:07:31.224 } 00:07:31.224 ] 00:07:31.224 } 00:07:31.224 ] 00:07:31.224 } 00:07:31.224 [2024-05-16 18:28:44.655034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.484 [2024-05-16 18:28:44.781842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.484 [2024-05-16 18:28:44.837194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.743  Copying: 12/36 [MB] (average 480 MBps) 00:07:31.743 00:07:31.743 00:07:31.743 real 0m0.713s 00:07:31.743 user 0m0.467s 00:07:31.743 sys 0m0.346s 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:31.743 ************************************ 00:07:31.743 END TEST dd_sparse_file_to_bdev 00:07:31.743 ************************************ 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:31.743 ************************************ 00:07:31.743 START TEST dd_sparse_bdev_to_file 00:07:31.743 ************************************ 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1121 -- # bdev_to_file 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 
00:07:31.743 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:32.003 [2024-05-16 18:28:45.278565] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:32.003 [2024-05-16 18:28:45.278678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64375 ] 00:07:32.003 { 00:07:32.003 "subsystems": [ 00:07:32.003 { 00:07:32.003 "subsystem": "bdev", 00:07:32.003 "config": [ 00:07:32.003 { 00:07:32.003 "params": { 00:07:32.003 "block_size": 4096, 00:07:32.003 "filename": "dd_sparse_aio_disk", 00:07:32.003 "name": "dd_aio" 00:07:32.003 }, 00:07:32.003 "method": "bdev_aio_create" 00:07:32.003 }, 00:07:32.003 { 00:07:32.003 "method": "bdev_wait_for_examine" 00:07:32.003 } 00:07:32.003 ] 00:07:32.003 } 00:07:32.003 ] 00:07:32.003 } 00:07:32.003 [2024-05-16 18:28:45.419444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.262 [2024-05-16 18:28:45.548740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.262 [2024-05-16 18:28:45.605793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.521  Copying: 12/36 [MB] (average 857 MBps) 00:07:32.521 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:32.521 00:07:32.521 real 0m0.728s 00:07:32.521 user 0m0.486s 00:07:32.521 sys 0m0.340s 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:32.521 ************************************ 00:07:32.521 END TEST dd_sparse_bdev_to_file 00:07:32.521 ************************************ 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:32.521 18:28:45 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:07:32.521 18:28:46 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:32.521 00:07:32.521 real 0m2.498s 00:07:32.521 user 0m1.549s 00:07:32.521 sys 0m1.248s 
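Note: the reverse leg above reads dd_lvstore/dd_lvol back into file_zero3, and the stat checks are the actual sparseness verification. file_zero2 and file_zero3 both report an apparent size of 37748736 bytes (36 MiB, stat --printf=%s) but only 24576 allocated 512-byte blocks (stat --printf=%b), i.e. 24576 * 512 = 12582912 bytes = 12 MiB of real data, matching the "Copying: 12/36 [MB]" progress lines; the holes survived the file -> bdev -> file round trip. A minimal sketch of that check, assuming the files exist as the test leaves them:

# Verify that apparent size and allocated blocks match after the round trip.
apparent2=$(stat --printf=%s file_zero2)   # 37748736 bytes = 36 MiB logical size
apparent3=$(stat --printf=%s file_zero3)
blocks2=$(stat --printf=%b file_zero2)     # 24576 blocks * 512 B = 12 MiB actually allocated
blocks3=$(stat --printf=%b file_zero3)
[ "$apparent2" = "$apparent3" ] && [ "$blocks2" = "$blocks3" ] \
    && echo "sparseness preserved across the bdev round trip"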
00:07:32.521 18:28:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.521 18:28:46 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:32.521 ************************************ 00:07:32.521 END TEST spdk_dd_sparse 00:07:32.521 ************************************ 00:07:32.781 18:28:46 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:32.781 18:28:46 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:32.781 18:28:46 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.781 18:28:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:32.781 ************************************ 00:07:32.781 START TEST spdk_dd_negative 00:07:32.781 ************************************ 00:07:32.781 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:32.781 * Looking for test storage... 00:07:32.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:32.781 18:28:46 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:32.781 18:28:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.781 18:28:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.781 18:28:46 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.781 18:28:46 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.781 18:28:46 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:32.782 ************************************ 00:07:32.782 START TEST dd_invalid_arguments 00:07:32.782 ************************************ 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1121 -- # invalid_arguments 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # 
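Note: every negative test in this section wraps its spdk_dd invocation in the harness's NOT helper, which is what the repeated valid_exec_arg / case "$(type -t "$arg")" machinery above and below is doing: run the command and treat the test as passed only if the command exits non-zero (statuses above 128 are normalized first, which is why values such as es=244 later collapse to smaller numbers). A heavily simplified sketch of the idea, not the real common/autotest_common.sh implementation:

# Simplified sketch of the NOT helper: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> negative test fails
    fi
    return 0        # command failed as expected -> negative test passes
}
# Its first use here wraps the deliberately bogus --ii= option:
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=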
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:32.782 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:32.782 00:07:32.782 CPU options: 00:07:32.782 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:32.782 (like [0,1,10]) 00:07:32.782 --lcores lcore to CPU mapping list. The list is in the format: 00:07:32.782 [<,lcores[@CPUs]>...] 00:07:32.782 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:32.782 Within the group, '-' is used for range separator, 00:07:32.782 ',' is used for single number separator. 00:07:32.782 '( )' can be omitted for single element group, 00:07:32.782 '@' can be omitted if cpus and lcores have the same value 00:07:32.782 --disable-cpumask-locks Disable CPU core lock files. 00:07:32.782 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:32.782 pollers in the app support interrupt mode) 00:07:32.782 -p, --main-core main (primary) core for DPDK 00:07:32.782 00:07:32.782 Configuration options: 00:07:32.782 -c, --config, --json JSON config file 00:07:32.782 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:32.782 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:32.782 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:32.782 --rpcs-allowed comma-separated list of permitted RPCS 00:07:32.782 --json-ignore-init-errors don't exit on invalid config entry 00:07:32.782 00:07:32.782 Memory options: 00:07:32.782 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:32.782 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:32.782 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:32.782 -R, --huge-unlink unlink huge files after initialization 00:07:32.782 -n, --mem-channels number of memory channels used for DPDK 00:07:32.782 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:32.782 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:32.782 --no-huge run without using hugepages 00:07:32.782 -i, --shm-id shared memory ID (optional) 00:07:32.782 -g, --single-file-segments force creating just one hugetlbfs file 00:07:32.782 00:07:32.782 PCI options: 00:07:32.782 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:32.782 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:32.782 -u, --no-pci disable PCI access 00:07:32.782 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:32.782 00:07:32.782 Log options: 00:07:32.782 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:32.782 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:32.782 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:32.782 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:32.782 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:32.782 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:32.782 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:32.782 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:32.782 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:32.782 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:32.782 virtio_vfio_user, vmd) 00:07:32.782 --silence-noticelog 
disable notice level logging to stderr 00:07:32.782 00:07:32.782 Trace options: 00:07:32.782 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:32.782 setting 0 to disable trace (default 32768) 00:07:32.782 Tracepoints vary in size and can use more than one trace entry. 00:07:32.782 -e, --tpoint-group [:] 00:07:32.782 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:32.782 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:32.782 [2024-05-16 18:28:46.214010] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:32.782 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:32.782 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:32.782 a tracepoint group. First tpoint inside a group can be enabled by 00:07:32.782 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:32.782 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:32.782 in /include/spdk_internal/trace_defs.h 00:07:32.782 00:07:32.782 Other options: 00:07:32.782 -h, --help show this usage 00:07:32.782 -v, --version print SPDK version 00:07:32.782 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:32.782 --env-context Opaque context for use of the env implementation 00:07:32.782 00:07:32.782 Application specific: 00:07:32.782 [--------- DD Options ---------] 00:07:32.782 --if Input file. Must specify either --if or --ib. 00:07:32.782 --ib Input bdev. Must specifier either --if or --ib 00:07:32.782 --of Output file. Must specify either --of or --ob. 00:07:32.782 --ob Output bdev. Must specify either --of or --ob. 00:07:32.782 --iflag Input file flags. 00:07:32.782 --oflag Output file flags. 00:07:32.782 --bs I/O unit size (default: 4096) 00:07:32.782 --qd Queue depth (default: 2) 00:07:32.782 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:32.782 --skip Skip this many I/O units at start of input. (default: 0) 00:07:32.782 --seek Skip this many I/O units at start of output. (default: 0) 00:07:32.782 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:07:32.782 --sparse Enable hole skipping in input target 00:07:32.782 Available iflag and oflag values: 00:07:32.782 append - append mode 00:07:32.782 direct - use direct I/O for data 00:07:32.782 directory - fail unless a directory 00:07:32.782 dsync - use synchronized I/O for data 00:07:32.782 noatime - do not update access time 00:07:32.782 noctty - do not assign controlling terminal from file 00:07:32.782 nofollow - do not follow symlinks 00:07:32.782 nonblock - use non-blocking I/O 00:07:32.782 sync - use synchronized I/O for data and metadata 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:32.782 00:07:32.782 real 0m0.079s 00:07:32.782 user 0m0.051s 00:07:32.782 sys 0m0.028s 00:07:32.782 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.782 ************************************ 00:07:32.782 END TEST dd_invalid_arguments 00:07:32.783 18:28:46 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:32.783 ************************************ 00:07:32.783 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:32.783 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:32.783 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.783 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.042 ************************************ 00:07:33.042 START TEST dd_double_input 00:07:33.042 ************************************ 00:07:33.042 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1121 -- # double_input 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- 
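Note: the full usage text above is printed because --ii= is not among the listed options (the input selectors are --if for a file and --ib for a bdev), so spdk_dd exits with status 2 and dd_invalid_arguments passes. Going by the "DD Options" section of that help output, an ordinary, valid file-to-file copy would look something like the following; input.bin and output.bin are illustrative paths, not part of the test:

# Illustrative use of the documented DD options (not taken from the test itself).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=input.bin --of=output.bin \
    --bs=4096 --qd=2 --count=1024 --skip=0 --seek=0 --sparse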
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:33.043 [2024-05-16 18:28:46.341036] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.043 00:07:33.043 real 0m0.077s 00:07:33.043 user 0m0.041s 00:07:33.043 sys 0m0.035s 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:33.043 ************************************ 00:07:33.043 END TEST dd_double_input 00:07:33.043 ************************************ 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.043 ************************************ 00:07:33.043 START TEST dd_double_output 00:07:33.043 ************************************ 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1121 -- # double_output 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.043 18:28:46 
spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:33.043 [2024-05-16 18:28:46.465443] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.043 00:07:33.043 real 0m0.073s 00:07:33.043 user 0m0.048s 00:07:33.043 sys 0m0.024s 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:33.043 ************************************ 00:07:33.043 END TEST dd_double_output 00:07:33.043 ************************************ 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.043 ************************************ 00:07:33.043 START TEST dd_no_input 00:07:33.043 ************************************ 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1121 -- # no_input 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- 
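Note: the two failures above exercise the mutual-exclusion checks on source and destination: dd_double_input passes both --if and --ib and is rejected with "You may specify either --if or --ib, but not both.", and dd_double_output does the same for --of and --ob. Both runs exit with status 22, which the harness accepts as the expected failure. The exact invocations, copied from the test:

# Both commands are expected to fail (exit status 22), as in the log above.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
$spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=     # --if and --ib together
$spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=                # --of and --ob together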
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.043 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:33.304 [2024-05-16 18:28:46.582309] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.304 00:07:33.304 real 0m0.072s 00:07:33.304 user 0m0.048s 00:07:33.304 sys 0m0.023s 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:33.304 ************************************ 00:07:33.304 END TEST dd_no_input 00:07:33.304 ************************************ 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.304 ************************************ 00:07:33.304 START TEST dd_no_output 00:07:33.304 ************************************ 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1121 -- # no_output 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.304 18:28:46 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:33.304 [2024-05-16 18:28:46.700397] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.304 00:07:33.304 real 0m0.069s 00:07:33.304 user 0m0.044s 00:07:33.304 sys 0m0.025s 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:33.304 ************************************ 00:07:33.304 END TEST dd_no_output 00:07:33.304 ************************************ 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.304 ************************************ 00:07:33.304 START TEST dd_wrong_blocksize 00:07:33.304 ************************************ 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1121 -- # wrong_blocksize 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.304 18:28:46 
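Note: dd_no_input and dd_no_output above check the opposite condition: with neither --if nor --ib, spdk_dd fails with "You must specify either --if or --ib", and with an input but neither --of nor --ob it fails with "You must specify either --of or --ob". Again both exit with status 22. The invocations as the test issues them:

# Both expected to fail: missing input in the first case, missing output in the second.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
$spdk_dd --ob=                                                    # no --if/--ib given
$spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0       # no --of/--ob given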
spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.304 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:33.563 [2024-05-16 18:28:46.822668] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.563 00:07:33.563 real 0m0.071s 00:07:33.563 user 0m0.040s 00:07:33.563 sys 0m0.030s 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.563 ************************************ 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:33.563 END TEST dd_wrong_blocksize 00:07:33.563 ************************************ 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:33.563 ************************************ 00:07:33.563 START TEST dd_smaller_blocksize 00:07:33.563 ************************************ 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1121 -- # smaller_blocksize 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.563 18:28:46 
spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.563 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.564 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.564 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.564 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:33.564 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:33.564 18:28:46 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:33.564 [2024-05-16 18:28:46.947762] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:33.564 [2024-05-16 18:28:46.947867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64588 ] 00:07:33.847 [2024-05-16 18:28:47.082389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.847 [2024-05-16 18:28:47.216731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.847 [2024-05-16 18:28:47.269616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:34.106 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:34.106 [2024-05-16 18:28:47.577162] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:34.106 [2024-05-16 18:28:47.577242] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.365 [2024-05-16 18:28:47.691085] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:34.365 18:28:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.366 00:07:34.366 real 0m0.906s 00:07:34.366 user 0m0.443s 00:07:34.366 sys 0m0.354s 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:34.366 ************************************ 00:07:34.366 END TEST dd_smaller_blocksize 00:07:34.366 ************************************ 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative -- 
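Note: the two block-size tests above fail at different stages: --bs=0 is rejected up front by argument validation ("Invalid --bs value", exit status 22), while --bs=99999999999999 parses fine but the application then cannot allocate an I/O buffer of that size, so DPDK reports "eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list" and spdk_dd aborts with "Cannot allocate memory - try smaller block size value"; the raw exit status 244 is normalized by the harness (244 -> 116 -> 1) and still counts as the expected failure. Reproductions using the suite's scratch dump files:

# Expected failures: the first from option validation, the second from buffer allocation at runtime.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
$spdk_dd --if="$dump0" --of="$dump1" --bs=0                   # "Invalid --bs value"
$spdk_dd --if="$dump0" --of="$dump1" --bs=99999999999999      # allocation failure at runtime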
dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:34.366 ************************************ 00:07:34.366 START TEST dd_invalid_count 00:07:34.366 ************************************ 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1121 -- # invalid_count 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.366 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:34.625 [2024-05-16 18:28:47.887605] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.625 00:07:34.625 real 0m0.061s 00:07:34.625 user 0m0.034s 00:07:34.625 sys 0m0.027s 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.625 ************************************ 00:07:34.625 END TEST dd_invalid_count 00:07:34.625 ************************************ 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:34.625 ************************************ 00:07:34.625 START TEST dd_invalid_oflag 00:07:34.625 ************************************ 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1121 -- # invalid_oflag 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.625 18:28:47 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:34.625 [2024-05-16 18:28:48.008940] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.625 00:07:34.625 real 0m0.075s 00:07:34.625 user 0m0.041s 00:07:34.625 sys 0m0.033s 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:34.625 
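Note: dd_invalid_count and dd_invalid_oflag above cover two more validation paths: a negative --count is rejected with "Invalid --count value", and --oflag is only accepted together with --of ("--oflags may be used only with --of"); the mirror-image --iflag/--if check follows next. Both exit with status 22. The invocations as issued by the test:

# Both expected to fail with exit status 22.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
$spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9     # negative count
$spdk_dd --ib= --ob= --oflag=0                                        # --oflag without --of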
************************************ 00:07:34.625 END TEST dd_invalid_oflag 00:07:34.625 ************************************ 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:34.625 ************************************ 00:07:34.625 START TEST dd_invalid_iflag 00:07:34.625 ************************************ 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1121 -- # invalid_iflag 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.625 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:34.625 [2024-05-16 18:28:48.122117] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.884 00:07:34.884 real 0m0.063s 00:07:34.884 user 0m0.036s 00:07:34.884 sys 0m0.026s 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:07:34.884 ************************************ 00:07:34.884 END 
TEST dd_invalid_iflag 00:07:34.884 ************************************ 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:34.884 ************************************ 00:07:34.884 START TEST dd_unknown_flag 00:07:34.884 ************************************ 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1121 -- # unknown_flag 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.884 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:34.884 [2024-05-16 18:28:48.237802] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
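Note: the dd_unknown_flag invocation above passes --oflag=-1, and in the output that follows parse_flags rejects it with "Unknown file flag: -1", since -1 is not one of the names listed in the usage text (append, direct, directory, dsync, noatime, noctty, nofollow, nonblock, sync). The subsequent "Failed to register files with io_uring: -9 (Bad file descriptor)" message is most plausibly fallout from the output file never being opened after the flag error rather than an independent problem; the copy is abandoned and the raw exit status 234 is normalized to 1 by the harness. Reproduction, expected to fail:

# Expected to fail: -1 is not a recognized oflag name.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
$spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1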
00:07:34.884 [2024-05-16 18:28:48.237951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64691 ] 00:07:34.884 [2024-05-16 18:28:48.374415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.143 [2024-05-16 18:28:48.545960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.143 [2024-05-16 18:28:48.603266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:35.143 [2024-05-16 18:28:48.641234] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:35.143 [2024-05-16 18:28:48.641302] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.143 [2024-05-16 18:28:48.641371] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:35.143 [2024-05-16 18:28:48.641388] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.143 [2024-05-16 18:28:48.641666] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:35.143 [2024-05-16 18:28:48.641687] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.143 [2024-05-16 18:28:48.641744] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:35.143 [2024-05-16 18:28:48.641757] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:35.401 [2024-05-16 18:28:48.757706] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:35.401 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:35.401 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.401 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:35.401 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:35.401 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:35.401 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.401 00:07:35.401 real 0m0.674s 00:07:35.401 user 0m0.408s 00:07:35.401 sys 0m0.165s 00:07:35.401 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.401 ************************************ 00:07:35.401 END TEST dd_unknown_flag 00:07:35.401 ************************************ 00:07:35.401 18:28:48 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.660 ************************************ 00:07:35.660 START TEST dd_invalid_json 00:07:35.660 ************************************ 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1121 -- # invalid_json 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:35.660 18:28:48 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:35.661 [2024-05-16 18:28:48.957814] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:35.661 [2024-05-16 18:28:48.957924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64714 ] 00:07:35.661 [2024-05-16 18:28:49.090757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.919 [2024-05-16 18:28:49.209951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.919 [2024-05-16 18:28:49.210050] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:35.919 [2024-05-16 18:28:49.210072] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:35.919 [2024-05-16 18:28:49.210085] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.919 [2024-05-16 18:28:49.210140] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.919 00:07:35.919 real 0m0.408s 00:07:35.919 user 0m0.231s 00:07:35.919 sys 0m0.074s 00:07:35.919 ************************************ 00:07:35.919 END TEST dd_invalid_json 00:07:35.919 ************************************ 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:35.919 ************************************ 00:07:35.919 END TEST spdk_dd_negative 00:07:35.919 ************************************ 00:07:35.919 00:07:35.919 real 0m3.292s 00:07:35.919 user 0m1.692s 00:07:35.919 sys 0m1.261s 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.919 18:28:49 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:35.919 ************************************ 00:07:35.919 END TEST spdk_dd 00:07:35.919 ************************************ 00:07:35.919 00:07:35.919 real 1m24.597s 00:07:35.919 user 0m55.960s 00:07:35.919 sys 0m35.497s 00:07:35.919 18:28:49 spdk_dd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.919 18:28:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:36.178 18:28:49 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:36.178 18:28:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:36.178 18:28:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:36.178 18:28:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.178 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:07:36.178 18:28:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:36.178 18:28:49 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:36.178 18:28:49 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:36.178 18:28:49 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:36.178 18:28:49 -- spdk/autotest.sh@283 -- # '[' tcp = 
rdma ']' 00:07:36.178 18:28:49 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:36.178 18:28:49 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:36.178 18:28:49 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:36.178 18:28:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.178 18:28:49 -- common/autotest_common.sh@10 -- # set +x 00:07:36.178 ************************************ 00:07:36.178 START TEST nvmf_tcp 00:07:36.178 ************************************ 00:07:36.178 18:28:49 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:36.178 * Looking for test storage... 00:07:36.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.178 18:28:49 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.178 18:28:49 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.178 18:28:49 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.178 18:28:49 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.178 18:28:49 nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.178 18:28:49 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.178 18:28:49 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:36.178 18:28:49 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:36.178 18:28:49 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:36.178 18:28:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:36.178 18:28:49 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:36.178 18:28:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:36.178 18:28:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:36.178 18:28:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.178 ************************************ 00:07:36.178 START TEST nvmf_host_management 00:07:36.178 ************************************ 00:07:36.178 18:28:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:36.178 * Looking for test storage... 
00:07:36.178 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:36.178 18:28:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:36.178 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
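Note: the xtrace above shows nvmftestinit taking the virtual-networking path: the VM exposes no usable physical NIC (is_hw=no) and NET_TYPE=virt, so the helper drops any stale target namespace and then builds a veth topology for the TCP transport. The following is a minimal sketch of that decision flow, condensed from the traced commands; function and variable names follow the trace, but the bodies are simplifications, not the real nvmf/common.sh.

#!/usr/bin/env bash
# Simplified view of the branch nvmftestinit takes in this run (sketch, not the real common.sh).
set -euo pipefail

NET_TYPE=${NET_TYPE:-virt}           # exported by autorun-spdk.conf in this job
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}

_remove_spdk_ns() {
    # Drop a stale target namespace from a previous run; ignore errors if it is absent.
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
}

nvmf_veth_init() {
    # Placeholder for the veth/bridge construction traced a few lines below.
    echo "building nvmf_init_if / nvmf_tgt_if veth topology"
}

nvmftestinit() {
    local is_hw=no                   # this CI VM has no suitable physical NIC
    _remove_spdk_ns
    if [[ $is_hw == no && $NET_TYPE == virt && $TEST_TRANSPORT == tcp ]]; then
        nvmf_veth_init
    fi
}

nvmftestinit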
00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:36.179 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:36.438 Cannot find device "nvmf_init_br" 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:36.438 Cannot find device "nvmf_tgt_br" 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:36.438 Cannot find device "nvmf_tgt_br2" 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:36.438 Cannot find device "nvmf_init_br" 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:36.438 Cannot find device "nvmf_tgt_br" 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:36.438 18:28:49 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:36.438 Cannot find device "nvmf_tgt_br2" 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:36.438 Cannot find device "nvmf_br" 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:36.438 Cannot find device "nvmf_init_if" 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:36.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:36.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:36.438 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:36.696 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
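Note: the "Cannot find device" / "Cannot open network namespace" messages above are expected; they come from the cleanup pass that removes leftovers from a previous run before the topology is rebuilt. Condensed, the layout traced so far can be reproduced with the commands below (run as root; the namespace and interface names are exactly those from the log). The host keeps nvmf_init_if at 10.0.0.1/24, the target namespace owns nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), and the *_br peer ends get enslaved to the nvmf_br bridge in the next traced step.

#!/usr/bin/env bash
# Rebuild the veth/namespace layout used by nvmf_veth_init (condensed from the trace; run as root).
set -euo pipefail
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
# One veth pair for the initiator side, two for the target side.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target ends into the namespace the nvmf_tgt process will run in.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
# Address the initiator end on the host and the target ends inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec "$NS" ip link set nvmf_tgt_if  up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
# A host-side bridge ties the three *_br peer ends together (they are enslaved just below).
ip link add nvmf_br type bridge
ip link set nvmf_br up

The three pings that follow in the trace are the sanity check that 10.0.0.2 and 10.0.0.3 are reachable from the host and that 10.0.0.1 is reachable from inside the namespace before any NVMe-oF traffic is attempted.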
00:07:36.696 18:28:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:36.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:07:36.696 00:07:36.696 --- 10.0.0.2 ping statistics --- 00:07:36.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.696 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:36.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:36.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:07:36.696 00:07:36.696 --- 10.0.0.3 ping statistics --- 00:07:36.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.696 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:36.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:07:36.696 00:07:36.696 --- 10.0.0.1 ping statistics --- 00:07:36.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.696 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.696 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=64961 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 64961 00:07:36.696 18:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 64961 ']' 00:07:36.697 18:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.697 18:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:36.697 18:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.697 18:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:36.697 18:28:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:36.697 [2024-05-16 18:28:50.166414] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:36.697 [2024-05-16 18:28:50.166574] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.955 [2024-05-16 18:28:50.310348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.215 [2024-05-16 18:28:50.465476] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.215 [2024-05-16 18:28:50.465817] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.215 [2024-05-16 18:28:50.466164] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.215 [2024-05-16 18:28:50.466499] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.215 [2024-05-16 18:28:50.466672] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
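Note: nvmfappstart, traced above, launches nvmf_tgt inside the target namespace with core mask 0x1E (cores 1-4), shared-memory id 0 and all tracepoint groups enabled, then blocks until the RPC socket is usable. A minimal sketch of that start-and-wait pattern follows; the binary path, flags and socket path are taken from the log, while the polling loop is only an illustrative stand-in for waitforlisten, not its actual implementation.

#!/usr/bin/env bash
# Start the NVMe-oF target inside the test namespace and wait for its RPC socket (run as root).
set -euo pipefail

NS=nvmf_tgt_ns_spdk
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

# -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0x1E: run reactors on cores 1-4.
ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Illustrative wait loop: poll until the process is alive and the RPC socket exists.
for _ in $(seq 1 100); do
    if [[ -S $RPC_SOCK ]] && kill -0 "$nvmfpid"; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.1
done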
00:07:37.215 [2024-05-16 18:28:50.467045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.215 [2024-05-16 18:28:50.467098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.215 [2024-05-16 18:28:50.467189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.215 [2024-05-16 18:28:50.467190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:37.215 [2024-05-16 18:28:50.569286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:37.782 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:37.782 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:07:37.782 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:37.782 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.782 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.782 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.782 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.782 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:37.782 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.782 [2024-05-16 18:28:51.255588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.041 Malloc0 00:07:38.041 [2024-05-16 18:28:51.354998] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:38.041 [2024-05-16 18:28:51.355419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
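Note: the batched rpc_cmd above is fed from rpcs.txt, whose contents are not echoed into the log. The sequence below is therefore a hedged reconstruction, not a copy of the script: it uses the transport options, Malloc0 name, 64 MiB x 512 B geometry, serial number, listener address and NQNs that do appear elsewhere in the trace, with standard SPDK rpc.py subcommands.

#!/usr/bin/env bash
# Hypothetical reconstruction of the RPC sequence behind the traced target setup.
set -euo pipefail
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# TCP transport with 8192-byte in-capsule data, matching 'nvmf_create_transport -t tcp -o -u 8192'.
$RPC nvmf_create_transport -t tcp -o -u 8192

# 64 MiB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from the trace).
$RPC bdev_malloc_create -b Malloc0 64 512

# Subsystem cnode0 backed by Malloc0, listening on the namespaced veth address, restricted to host0.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

This is what produces the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above; the deprecation warning about [listen_]address.transport simply means the batched JSON still uses the older "transport" key instead of "trtype".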
00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65031 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65031 /var/tmp/bdevperf.sock 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 65031 ']' 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:38.041 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:38.041 { 00:07:38.042 "params": { 00:07:38.042 "name": "Nvme$subsystem", 00:07:38.042 "trtype": "$TEST_TRANSPORT", 00:07:38.042 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:38.042 "adrfam": "ipv4", 00:07:38.042 "trsvcid": "$NVMF_PORT", 00:07:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:38.042 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:38.042 "hdgst": ${hdgst:-false}, 00:07:38.042 "ddgst": ${ddgst:-false} 00:07:38.042 }, 00:07:38.042 "method": "bdev_nvme_attach_controller" 00:07:38.042 } 00:07:38.042 EOF 00:07:38.042 )") 00:07:38.042 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:38.042 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:38.042 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:38.042 18:28:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:38.042 "params": { 00:07:38.042 "name": "Nvme0", 00:07:38.042 "trtype": "tcp", 00:07:38.042 "traddr": "10.0.0.2", 00:07:38.042 "adrfam": "ipv4", 00:07:38.042 "trsvcid": "4420", 00:07:38.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:38.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:38.042 "hdgst": false, 00:07:38.042 "ddgst": false 00:07:38.042 }, 00:07:38.042 "method": "bdev_nvme_attach_controller" 00:07:38.042 }' 00:07:38.042 [2024-05-16 18:28:51.473060] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:07:38.042 [2024-05-16 18:28:51.473175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65031 ] 00:07:38.301 [2024-05-16 18:28:51.615779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.301 [2024-05-16 18:28:51.744201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.560 [2024-05-16 18:28:51.807094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:38.560 Running I/O for 10 seconds... 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:38.560 18:28:51 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.561 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:38.561 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:38.561 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:38.561 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:38.819 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:38.819 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:38.819 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:38.819 18:28:52 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:38.819 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:38.819 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:38.819 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.077 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:39.077 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:39.077 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:39.077 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:39.077 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:39.077 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.077 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.077 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.077 [2024-05-16 18:28:52.356929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.356981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.077 [2024-05-16 18:28:52.357482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.077 [2024-05-16 18:28:52.357493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.078 [2024-05-16 18:28:52.357503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.078 [2024-05-16 18:28:52.357514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.078 [2024-05-16 18:28:52.357523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.078 [2024-05-16 18:28:52.357534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:39.078 [2024-05-16 18:28:52.357544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.078 [2024-05-16 18:28:52.357555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:39.078 [2024-05-16 18:28:52.357564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:39.078 [... 18:28:52.357575 through 18:28:52.358422: nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs for READ sqid:1 cid:25-61 nsid:1 lba:76928-81536 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; 37 repetitive pairs elided ...]
00:07:39.079 [2024-05-16 18:28:52.358437] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66c550 is same with the state(5) to be set
00:07:39.079 [2024-05-16 18:28:52.358510] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x66c550 was disconnected and freed. reset controller.
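Editor's note: the "(00/08)" pair in the aborted completions above is SPDK's (status code type/status code) notation: SCT 0x0 is the generic command status set and SC 0x08 is ABORTED - SQ DELETION, i.e. the in-flight reads were flushed when submission queue 1 was deleted for the controller reset. A minimal bash sketch (not part of the test scripts, assuming the standard NVMe completion dword 3 layout) of where those fields sit in the 16-bit status+phase word:

decode_nvme_status() {
  # $1: the 16-bit word from NVMe CQE dword 3 bits 31:16 (phase tag in bit 0).
  # Field layout: P = bit 0, SC = bits 8:1, SCT = bits 11:9, M = bit 14, DNR = bit 15.
  local sts=$1
  printf '(%02x/%02x) p:%d m:%d dnr:%d\n' \
    $(( (sts >> 9) & 0x7 )) $(( (sts >> 1) & 0xff )) \
    $(( sts & 0x1 )) $(( (sts >> 14) & 0x1 )) $(( (sts >> 15) & 0x1 ))
}
decode_nvme_status $(( (0x0 << 9) | (0x08 << 1) ))   # prints "(00/08) p:0 m:0 dnr:0"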
00:07:39.079 [2024-05-16 18:28:52.358636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.079 [2024-05-16 18:28:52.358653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.079 [2024-05-16 18:28:52.358665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.079 [2024-05-16 18:28:52.358675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.079 [2024-05-16 18:28:52.358685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.079 [2024-05-16 18:28:52.358694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.079 [2024-05-16 18:28:52.358704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:39.079 [2024-05-16 18:28:52.358713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:39.079 [2024-05-16 18:28:52.358722] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x66b510 is same with the state(5) to be set 00:07:39.079 [2024-05-16 18:28:52.359872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:39.079 task offset: 81664 on job bdev=Nvme0n1 fails 00:07:39.079 00:07:39.079 Latency(us) 00:07:39.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.079 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:39.079 Job: Nvme0n1 ended in about 0.44 seconds with error 00:07:39.079 Verification LBA range: start 0x0 length 0x400 00:07:39.079 Nvme0n1 : 0.44 1309.41 81.84 145.49 0.00 42536.60 2517.18 41466.41 00:07:39.079 =================================================================================================================== 00:07:39.079 Total : 1309.41 81.84 145.49 0.00 42536.60 2517.18 41466.41 00:07:39.079 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.079 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:39.079 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:39.079 [2024-05-16 18:28:52.361765] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:39.079 [2024-05-16 18:28:52.361789] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66b510 (9): Bad file descriptor 00:07:39.079 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:39.079 [2024-05-16 18:28:52.368022] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
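Editor's note: the host-management step exercised here is driven through plain rpc.py calls against the target's RPC socket. A hedged sketch follows; the subsystem and host NQNs are taken from the log, but the exact sequence inside host_management.sh is not reproduced, and the remove_host step is an assumption about what provoked the SQ-deletion aborts above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Disallowing a host is expected to drop its existing connections to the subsystem
# (assumed cause of the aborted qpair seen earlier)...
"$rpc" nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ...and re-allowing it lets the initiator's reset/reconnect succeed, matching the
# "Resetting controller successful" notice above.
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0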
00:07:39.079 18:28:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:39.079 18:28:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65031 00:07:40.011 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65031) - No such process 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:40.011 { 00:07:40.011 "params": { 00:07:40.011 "name": "Nvme$subsystem", 00:07:40.011 "trtype": "$TEST_TRANSPORT", 00:07:40.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:40.011 "adrfam": "ipv4", 00:07:40.011 "trsvcid": "$NVMF_PORT", 00:07:40.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:40.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:40.011 "hdgst": ${hdgst:-false}, 00:07:40.011 "ddgst": ${ddgst:-false} 00:07:40.011 }, 00:07:40.011 "method": "bdev_nvme_attach_controller" 00:07:40.011 } 00:07:40.011 EOF 00:07:40.011 )") 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:40.011 18:28:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:40.011 "params": { 00:07:40.011 "name": "Nvme0", 00:07:40.011 "trtype": "tcp", 00:07:40.011 "traddr": "10.0.0.2", 00:07:40.011 "adrfam": "ipv4", 00:07:40.011 "trsvcid": "4420", 00:07:40.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:40.011 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:40.011 "hdgst": false, 00:07:40.011 "ddgst": false 00:07:40.011 }, 00:07:40.011 "method": "bdev_nvme_attach_controller" 00:07:40.011 }' 00:07:40.011 [2024-05-16 18:28:53.426804] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:40.011 [2024-05-16 18:28:53.426913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65066 ] 00:07:40.269 [2024-05-16 18:28:53.568386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.269 [2024-05-16 18:28:53.696299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.269 [2024-05-16 18:28:53.762798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.526 Running I/O for 1 seconds... 
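Editor's note: the bdevperf invocation above never writes a config file; gen_nvmf_target_json emits the JSON on stdout and the script hands it over as an anonymous /dev/fd descriptor (--json /dev/fd/62, i.e. process substitution). A standalone sketch of the same pattern: the bdev_nvme_attach_controller parameters are the ones printed in the log, while the outer "subsystems"/"bdev" wrapper and the bdevperf path follow the usual SPDK conventions rather than being copied from this run:

cfg='{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}'
# Same shape of invocation as in the log: config arrives on an anonymous fd,
# one 64-deep 64 KiB verify job runs for one second against the attached namespace.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json <(printf '%s\n' "$cfg") -q 64 -o 65536 -w verify -t 1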
00:07:41.460 00:07:41.460 Latency(us) 00:07:41.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.460 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:41.460 Verification LBA range: start 0x0 length 0x400 00:07:41.460 Nvme0n1 : 1.04 1416.68 88.54 0.00 0.00 44317.25 4855.62 41704.73 00:07:41.460 =================================================================================================================== 00:07:41.460 Total : 1416.68 88.54 0.00 0.00 44317.25 4855.62 41704.73 00:07:41.745 18:28:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:41.745 18:28:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:41.745 18:28:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:41.745 18:28:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:41.745 18:28:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:41.745 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:41.745 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.013 rmmod nvme_tcp 00:07:42.013 rmmod nvme_fabrics 00:07:42.013 rmmod nvme_keyring 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 64961 ']' 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 64961 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 64961 ']' 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 64961 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64961 00:07:42.013 killing process with pid 64961 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64961' 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 64961 00:07:42.013 [2024-05-16 18:28:55.308701] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal 
in v24.09 hit 1 times 00:07:42.013 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 64961 00:07:42.272 [2024-05-16 18:28:55.547807] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:42.272 00:07:42.272 real 0m6.046s 00:07:42.272 user 0m22.957s 00:07:42.272 sys 0m1.530s 00:07:42.272 ************************************ 00:07:42.272 END TEST nvmf_host_management 00:07:42.272 ************************************ 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.272 18:28:55 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.272 18:28:55 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:42.272 18:28:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:42.272 18:28:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:42.272 18:28:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:42.272 ************************************ 00:07:42.272 START TEST nvmf_lvol 00:07:42.272 ************************************ 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:42.272 * Looking for test storage... 
00:07:42.272 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.272 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:42.273 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:42.532 18:28:55 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:42.532 Cannot find device "nvmf_tgt_br" 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:42.532 Cannot find device "nvmf_tgt_br2" 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:42.532 Cannot find device "nvmf_tgt_br" 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:42.532 Cannot find device "nvmf_tgt_br2" 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:42.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:42.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:42.532 18:28:55 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:42.532 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:42.532 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:42.532 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:42.532 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:42.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:07:42.791 00:07:42.791 --- 10.0.0.2 ping statistics --- 00:07:42.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.791 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:42.791 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:42.791 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:07:42.791 00:07:42.791 --- 10.0.0.3 ping statistics --- 00:07:42.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.791 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:42.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:42.791 00:07:42.791 --- 10.0.0.1 ping statistics --- 00:07:42.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.791 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65280 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65280 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 65280 ']' 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:42.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:42.791 18:28:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:42.791 [2024-05-16 18:28:56.175677] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:42.791 [2024-05-16 18:28:56.175784] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.049 [2024-05-16 18:28:56.317548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.050 [2024-05-16 18:28:56.442742] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.050 [2024-05-16 18:28:56.443061] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
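Editor's note on the nvmf_veth_init block above (the ip/iptables/ping sequence before the target came up): the test network is small enough to sketch in a few commands. This is a trimmed illustration rather than a verbatim replay of nvmf/common.sh (the second target interface, 10.0.0.3, and the FORWARD rule are left out): the initiator keeps 10.0.0.1 on nvmf_init_if in the root namespace, the target end lives as nvmf_tgt_if inside the nvmf_tgt_ns_spdk namespace with 10.0.0.2, both peer ends hang off the nvmf_br bridge, and TCP/4420 is explicitly accepted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side of the second veth pair...
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # ...moves into the target namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> namespaced target, as in the statistics above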
00:07:43.050 [2024-05-16 18:28:56.443220] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.050 [2024-05-16 18:28:56.443369] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.050 [2024-05-16 18:28:56.443418] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.050 [2024-05-16 18:28:56.443691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.050 [2024-05-16 18:28:56.443844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.050 [2024-05-16 18:28:56.443848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.050 [2024-05-16 18:28:56.502654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:43.985 18:28:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:43.985 18:28:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:07:43.985 18:28:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.985 18:28:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:43.985 18:28:57 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:43.985 18:28:57 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.985 18:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:43.985 [2024-05-16 18:28:57.421074] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.985 18:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:44.243 18:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:44.243 18:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:44.501 18:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:44.501 18:28:57 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:44.760 18:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:45.020 18:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a1331555-0a82-4224-8778-15117b724d3c 00:07:45.020 18:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1331555-0a82-4224-8778-15117b724d3c lvol 20 00:07:45.279 18:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9b9ca829-52a4-48cc-8f28-dce5bdac3d19 00:07:45.279 18:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:45.539 18:28:58 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b9ca829-52a4-48cc-8f28-dce5bdac3d19 00:07:45.799 18:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:46.057 [2024-05-16 18:28:59.384677] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:46.057 [2024-05-16 18:28:59.384980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.057 18:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:46.324 18:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65356 00:07:46.324 18:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:46.324 18:28:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:47.277 18:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 9b9ca829-52a4-48cc-8f28-dce5bdac3d19 MY_SNAPSHOT 00:07:47.537 18:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=287fdc06-48c2-4928-979f-06480a015ec3 00:07:47.537 18:29:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 9b9ca829-52a4-48cc-8f28-dce5bdac3d19 30 00:07:47.796 18:29:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 287fdc06-48c2-4928-979f-06480a015ec3 MY_CLONE 00:07:48.055 18:29:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6f4cd4f7-2f59-4504-870f-eb6d0d6b6b67 00:07:48.055 18:29:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 6f4cd4f7-2f59-4504-870f-eb6d0d6b6b67 00:07:48.623 18:29:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65356 00:07:56.739 Initializing NVMe Controllers 00:07:56.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:56.739 Controller IO queue size 128, less than required. 00:07:56.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:56.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:56.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:56.740 Initialization complete. Launching workers. 
00:07:56.740 ======================================================== 00:07:56.740 Latency(us) 00:07:56.740 Device Information : IOPS MiB/s Average min max 00:07:56.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10632.70 41.53 12043.19 213.71 84254.99 00:07:56.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10602.90 41.42 12081.77 3078.59 63547.00 00:07:56.740 ======================================================== 00:07:56.740 Total : 21235.60 82.95 12062.45 213.71 84254.99 00:07:56.740 00:07:56.740 18:29:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:56.999 18:29:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9b9ca829-52a4-48cc-8f28-dce5bdac3d19 00:07:56.999 18:29:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1331555-0a82-4224-8778-15117b724d3c 00:07:57.566 18:29:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:57.566 18:29:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:57.566 18:29:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:57.566 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:57.566 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:57.566 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:57.566 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:57.566 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:57.566 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:57.566 rmmod nvme_tcp 00:07:57.567 rmmod nvme_fabrics 00:07:57.567 rmmod nvme_keyring 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65280 ']' 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65280 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 65280 ']' 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 65280 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65280 00:07:57.567 killing process with pid 65280 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65280' 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 65280 00:07:57.567 [2024-05-16 18:29:10.926740] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:57.567 18:29:10 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@970 -- # wait 65280 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:57.825 ************************************ 00:07:57.825 END TEST nvmf_lvol 00:07:57.825 ************************************ 00:07:57.825 00:07:57.825 real 0m15.566s 00:07:57.825 user 1m4.626s 00:07:57.825 sys 0m4.308s 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:57.825 18:29:11 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:57.825 18:29:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:57.825 18:29:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.825 18:29:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:57.825 ************************************ 00:07:57.825 START TEST nvmf_lvs_grow 00:07:57.825 ************************************ 00:07:57.825 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:58.084 * Looking for test storage... 
00:07:58.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:58.084 Cannot find device "nvmf_tgt_br" 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:58.084 Cannot find device "nvmf_tgt_br2" 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:58.084 Cannot find device "nvmf_tgt_br" 00:07:58.084 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:58.085 Cannot find device "nvmf_tgt_br2" 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:58.085 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:58.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:58.085 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:58.343 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:58.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:58.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:07:58.343 00:07:58.343 --- 10.0.0.2 ping statistics --- 00:07:58.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.343 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:58.344 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:58.344 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:07:58.344 00:07:58.344 --- 10.0.0.3 ping statistics --- 00:07:58.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.344 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:58.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:58.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:07:58.344 00:07:58.344 --- 10.0.0.1 ping statistics --- 00:07:58.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.344 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65683 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65683 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 65683 ']' 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
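For reference, the block below is a condensed sketch of the nvmf_veth_init sequence traced above: one network namespace for the target, three veth pairs bridged together on the host side, and iptables rules that let NVMe/TCP traffic through. Interface names, addresses and port 4420 are taken from the log; the leftover-device cleanup and error handling the script performs first are omitted, and the snippet assumes it is run as root.

# Build the test topology used by this run (condensed from nvmf_veth_init above).
ip netns add nvmf_tgt_ns_spdk

# One veth pair for the initiator, two for the target; the *_br ends stay on the host.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side interfaces into the namespace the nvmf_tgt app will run in.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator on 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together and allow NVMe/TCP (port 4420) plus bridged traffic.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same reachability checks as the pings logged above.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the topology in place, nvmf_tgt is launched inside nvmf_tgt_ns_spdk (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1), so its TCP listeners bind to 10.0.0.2 and 10.0.0.3 while the initiator later connects from 10.0.0.1 across the bridge.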
00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:58.344 18:29:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:58.344 [2024-05-16 18:29:11.815960] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:07:58.344 [2024-05-16 18:29:11.816080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.603 [2024-05-16 18:29:11.955097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.603 [2024-05-16 18:29:12.067067] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.603 [2024-05-16 18:29:12.067125] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.603 [2024-05-16 18:29:12.067137] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.603 [2024-05-16 18:29:12.067146] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.603 [2024-05-16 18:29:12.067154] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:58.603 [2024-05-16 18:29:12.067179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.861 [2024-05-16 18:29:12.120050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:59.428 18:29:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:59.428 18:29:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:07:59.428 18:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.428 18:29:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:59.428 18:29:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.428 18:29:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.428 18:29:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:59.687 [2024-05-16 18:29:13.088933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:59.687 ************************************ 00:07:59.687 START TEST lvs_grow_clean 00:07:59.687 ************************************ 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:59.687 18:29:13 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:59.687 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:59.946 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:59.946 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:00.205 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:00.205 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:00.206 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:00.772 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:00.772 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:00.772 18:29:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 lvol 150 00:08:00.772 18:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bed845e6-b8d7-4eb6-91d2-8194e4d9d664 00:08:00.772 18:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:00.772 18:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:01.031 [2024-05-16 18:29:14.501520] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:01.031 [2024-05-16 18:29:14.501608] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:01.031 true 00:08:01.031 18:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:01.031 18:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:01.599 18:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:01.599 18:29:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:01.858 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bed845e6-b8d7-4eb6-91d2-8194e4d9d664 00:08:01.858 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:02.117 [2024-05-16 18:29:15.553870] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:02.117 [2024-05-16 18:29:15.554170] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.117 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.376 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65767 00:08:02.376 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:02.377 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.377 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65767 /var/tmp/bdevperf.sock 00:08:02.377 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 65767 ']' 00:08:02.377 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.377 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:02.377 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.377 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:02.377 18:29:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:02.636 [2024-05-16 18:29:15.920813] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
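To recap the target-side wiring that the bdevperf initiator starting above will connect to, the RPCs traced earlier reduce to the sketch below. The rpc and lvol shell variables are shorthand introduced here for readability; the paths, NQN, serial number and listener address come from the log, and the transport options are exactly the ones the test passed (-t tcp -o -u 8192).

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
lvol=bed845e6-b8d7-4eb6-91d2-8194e4d9d664        # lvol UUID created earlier in this run

$rpc nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, options as logged
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"       # expose the lvol as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420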
00:08:02.636 [2024-05-16 18:29:15.920960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65767 ] 00:08:02.636 [2024-05-16 18:29:16.060601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.895 [2024-05-16 18:29:16.232742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.895 [2024-05-16 18:29:16.290450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:03.464 18:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.464 18:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:08:03.464 18:29:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:04.053 Nvme0n1 00:08:04.053 18:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:04.053 [ 00:08:04.053 { 00:08:04.053 "name": "Nvme0n1", 00:08:04.053 "aliases": [ 00:08:04.053 "bed845e6-b8d7-4eb6-91d2-8194e4d9d664" 00:08:04.053 ], 00:08:04.053 "product_name": "NVMe disk", 00:08:04.053 "block_size": 4096, 00:08:04.053 "num_blocks": 38912, 00:08:04.053 "uuid": "bed845e6-b8d7-4eb6-91d2-8194e4d9d664", 00:08:04.053 "assigned_rate_limits": { 00:08:04.053 "rw_ios_per_sec": 0, 00:08:04.053 "rw_mbytes_per_sec": 0, 00:08:04.053 "r_mbytes_per_sec": 0, 00:08:04.053 "w_mbytes_per_sec": 0 00:08:04.053 }, 00:08:04.053 "claimed": false, 00:08:04.053 "zoned": false, 00:08:04.053 "supported_io_types": { 00:08:04.053 "read": true, 00:08:04.053 "write": true, 00:08:04.053 "unmap": true, 00:08:04.053 "write_zeroes": true, 00:08:04.053 "flush": true, 00:08:04.053 "reset": true, 00:08:04.053 "compare": true, 00:08:04.053 "compare_and_write": true, 00:08:04.053 "abort": true, 00:08:04.053 "nvme_admin": true, 00:08:04.053 "nvme_io": true 00:08:04.053 }, 00:08:04.053 "memory_domains": [ 00:08:04.053 { 00:08:04.053 "dma_device_id": "system", 00:08:04.053 "dma_device_type": 1 00:08:04.053 } 00:08:04.053 ], 00:08:04.053 "driver_specific": { 00:08:04.053 "nvme": [ 00:08:04.053 { 00:08:04.053 "trid": { 00:08:04.053 "trtype": "TCP", 00:08:04.053 "adrfam": "IPv4", 00:08:04.053 "traddr": "10.0.0.2", 00:08:04.053 "trsvcid": "4420", 00:08:04.053 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:04.053 }, 00:08:04.053 "ctrlr_data": { 00:08:04.053 "cntlid": 1, 00:08:04.053 "vendor_id": "0x8086", 00:08:04.053 "model_number": "SPDK bdev Controller", 00:08:04.053 "serial_number": "SPDK0", 00:08:04.053 "firmware_revision": "24.09", 00:08:04.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:04.053 "oacs": { 00:08:04.053 "security": 0, 00:08:04.053 "format": 0, 00:08:04.053 "firmware": 0, 00:08:04.053 "ns_manage": 0 00:08:04.053 }, 00:08:04.053 "multi_ctrlr": true, 00:08:04.053 "ana_reporting": false 00:08:04.053 }, 00:08:04.053 "vs": { 00:08:04.053 "nvme_version": "1.3" 00:08:04.053 }, 00:08:04.053 "ns_data": { 00:08:04.053 "id": 1, 00:08:04.053 "can_share": true 00:08:04.053 } 00:08:04.053 } 00:08:04.053 ], 00:08:04.053 "mp_policy": "active_passive" 00:08:04.053 } 00:08:04.053 } 00:08:04.053 ] 
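On the initiator side, bdevperf is driven through its own RPC socket rather than the target's. Below is a minimal sketch of the sequence above, with paths and options copied from the log; backgrounding the process and polling for its socket (what the test's waitforlisten helper does) is simplified here.

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# Start bdevperf idle (-z) with a 4 KiB, queue depth 128, 10 s random-write job on core mask 0x2.
$spdk/build/examples/bdevperf -r $sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

# Attach the exported subsystem as controller "Nvme0"; this creates the Nvme0n1 bdev
# whose JSON description was dumped above.
$spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
$spdk/scripts/rpc.py -s $sock bdev_get_bdevs -b Nvme0n1 -t 3000

# Kick off the queued workload (this is the perform_tests call on the next log line).
$spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests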
00:08:04.053 18:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65790 00:08:04.053 18:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:04.053 18:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:04.313 Running I/O for 10 seconds... 00:08:05.249 Latency(us) 00:08:05.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.249 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:05.249 =================================================================================================================== 00:08:05.249 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:05.249 00:08:06.185 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:06.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.185 Nvme0n1 : 2.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:06.185 =================================================================================================================== 00:08:06.185 Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:06.185 00:08:06.443 true 00:08:06.443 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:06.443 18:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:06.702 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:06.702 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:06.702 18:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65790 00:08:07.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.269 Nvme0n1 : 3.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:07.269 =================================================================================================================== 00:08:07.269 Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:07.269 00:08:08.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.222 Nvme0n1 : 4.00 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:08:08.222 =================================================================================================================== 00:08:08.222 Total : 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:08:08.222 00:08:09.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.170 Nvme0n1 : 5.00 7442.20 29.07 0.00 0.00 0.00 0.00 0.00 00:08:09.170 =================================================================================================================== 00:08:09.170 Total : 7442.20 29.07 0.00 0.00 0.00 0.00 0.00 00:08:09.170 00:08:10.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.126 Nvme0n1 : 6.00 7471.83 29.19 0.00 0.00 0.00 0.00 0.00 00:08:10.126 =================================================================================================================== 00:08:10.126 
Total : 7471.83 29.19 0.00 0.00 0.00 0.00 0.00 00:08:10.126 00:08:11.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.502 Nvme0n1 : 7.00 7402.29 28.92 0.00 0.00 0.00 0.00 0.00 00:08:11.502 =================================================================================================================== 00:08:11.502 Total : 7402.29 28.92 0.00 0.00 0.00 0.00 0.00 00:08:11.502 00:08:12.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.437 Nvme0n1 : 8.00 7286.62 28.46 0.00 0.00 0.00 0.00 0.00 00:08:12.437 =================================================================================================================== 00:08:12.437 Total : 7286.62 28.46 0.00 0.00 0.00 0.00 0.00 00:08:12.437 00:08:13.372 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.372 Nvme0n1 : 9.00 7168.44 28.00 0.00 0.00 0.00 0.00 0.00 00:08:13.372 =================================================================================================================== 00:08:13.372 Total : 7168.44 28.00 0.00 0.00 0.00 0.00 0.00 00:08:13.372 00:08:14.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.309 Nvme0n1 : 10.00 7086.60 27.68 0.00 0.00 0.00 0.00 0.00 00:08:14.309 =================================================================================================================== 00:08:14.309 Total : 7086.60 27.68 0.00 0.00 0.00 0.00 0.00 00:08:14.309 00:08:14.309 00:08:14.309 Latency(us) 00:08:14.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.309 Nvme0n1 : 10.01 7094.86 27.71 0.00 0.00 18036.29 14834.97 51713.86 00:08:14.309 =================================================================================================================== 00:08:14.309 Total : 7094.86 27.71 0.00 0.00 18036.29 14834.97 51713.86 00:08:14.309 0 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65767 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 65767 ']' 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 65767 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65767 00:08:14.309 killing process with pid 65767 00:08:14.309 Received shutdown signal, test time was about 10.000000 seconds 00:08:14.309 00:08:14.309 Latency(us) 00:08:14.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.309 =================================================================================================================== 00:08:14.309 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65767' 00:08:14.309 18:29:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 65767 00:08:14.309 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 65767 00:08:14.570 18:29:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.829 18:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:15.092 18:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:15.092 18:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:15.356 18:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:15.356 18:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:15.356 18:29:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:15.615 [2024-05-16 18:29:29.041531] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:15.615 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:15.874 request: 00:08:15.874 { 00:08:15.874 "uuid": "8a35a852-9e62-41d4-b57f-e8e1b0b697a2", 00:08:15.874 "method": "bdev_lvol_get_lvstores", 00:08:15.874 "req_id": 1 00:08:15.874 } 00:08:15.874 Got JSON-RPC error response 
00:08:15.874 response: 00:08:15.874 { 00:08:15.874 "code": -19, 00:08:15.874 "message": "No such device" 00:08:15.874 } 00:08:15.874 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:15.874 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:15.874 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:15.874 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:15.874 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:16.442 aio_bdev 00:08:16.442 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bed845e6-b8d7-4eb6-91d2-8194e4d9d664 00:08:16.442 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=bed845e6-b8d7-4eb6-91d2-8194e4d9d664 00:08:16.442 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:16.442 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:08:16.442 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:16.442 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:16.442 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:16.442 18:29:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bed845e6-b8d7-4eb6-91d2-8194e4d9d664 -t 2000 00:08:17.013 [ 00:08:17.013 { 00:08:17.013 "name": "bed845e6-b8d7-4eb6-91d2-8194e4d9d664", 00:08:17.013 "aliases": [ 00:08:17.013 "lvs/lvol" 00:08:17.013 ], 00:08:17.013 "product_name": "Logical Volume", 00:08:17.013 "block_size": 4096, 00:08:17.013 "num_blocks": 38912, 00:08:17.013 "uuid": "bed845e6-b8d7-4eb6-91d2-8194e4d9d664", 00:08:17.013 "assigned_rate_limits": { 00:08:17.013 "rw_ios_per_sec": 0, 00:08:17.013 "rw_mbytes_per_sec": 0, 00:08:17.013 "r_mbytes_per_sec": 0, 00:08:17.013 "w_mbytes_per_sec": 0 00:08:17.013 }, 00:08:17.013 "claimed": false, 00:08:17.013 "zoned": false, 00:08:17.013 "supported_io_types": { 00:08:17.013 "read": true, 00:08:17.013 "write": true, 00:08:17.013 "unmap": true, 00:08:17.013 "write_zeroes": true, 00:08:17.013 "flush": false, 00:08:17.013 "reset": true, 00:08:17.013 "compare": false, 00:08:17.013 "compare_and_write": false, 00:08:17.013 "abort": false, 00:08:17.013 "nvme_admin": false, 00:08:17.013 "nvme_io": false 00:08:17.013 }, 00:08:17.013 "driver_specific": { 00:08:17.013 "lvol": { 00:08:17.013 "lvol_store_uuid": "8a35a852-9e62-41d4-b57f-e8e1b0b697a2", 00:08:17.013 "base_bdev": "aio_bdev", 00:08:17.013 "thin_provision": false, 00:08:17.013 "num_allocated_clusters": 38, 00:08:17.013 "snapshot": false, 00:08:17.013 "clone": false, 00:08:17.013 "esnap_clone": false 00:08:17.013 } 00:08:17.013 } 00:08:17.013 } 00:08:17.013 ] 00:08:17.013 18:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:08:17.013 18:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:17.013 18:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:17.013 18:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:17.013 18:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:17.013 18:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:17.271 18:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:17.271 18:29:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bed845e6-b8d7-4eb6-91d2-8194e4d9d664 00:08:17.838 18:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a35a852-9e62-41d4-b57f-e8e1b0b697a2 00:08:18.096 18:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.354 18:29:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:18.613 ************************************ 00:08:18.613 END TEST lvs_grow_clean 00:08:18.613 ************************************ 00:08:18.613 00:08:18.613 real 0m18.933s 00:08:18.613 user 0m17.904s 00:08:18.613 sys 0m2.580s 00:08:18.613 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.613 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:18.613 18:29:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:18.613 18:29:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:18.613 18:29:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.613 18:29:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:18.613 ************************************ 00:08:18.613 START TEST lvs_grow_dirty 00:08:18.613 ************************************ 00:08:18.613 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:08:18.872 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:18.872 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:18.872 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:18.872 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:18.872 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:18.872 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:18.872 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:18.872 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:18.872 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:19.131 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:19.131 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:19.390 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:19.390 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:19.390 18:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:19.649 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:19.649 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:19.649 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cad88364-025e-4e93-8d5b-fea14a1e7200 lvol 150 00:08:19.908 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=93ce64f1-0f08-4c4c-99f8-d807c3dd9410 00:08:19.908 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:19.908 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:20.167 [2024-05-16 18:29:33.570922] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:20.167 [2024-05-16 18:29:33.571035] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:20.167 true 00:08:20.167 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:20.167 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:20.426 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:20.426 18:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:20.684 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 93ce64f1-0f08-4c4c-99f8-d807c3dd9410 00:08:20.942 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:21.200 [2024-05-16 18:29:34.651547] 
tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.200 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66047 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66047 /var/tmp/bdevperf.sock 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 66047 ']' 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:21.460 18:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.718 [2024-05-16 18:29:35.004616] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
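The dirty variant repeats the same lvstore sizing flow the clean test used, which is what the RPCs just above did. Condensed into a sketch, with rpc, aio_file, lvs and lvol as shorthand variables introduced here and values as logged for this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096              # 200 MiB AIO bdev, 4 KiB blocks
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # 4 MiB clusters; 49 usable, metadata takes the rest
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)            # 150 MiB lvol on top of the lvstore

# Double the backing file. The rescan grows the AIO bdev (51200 -> 102400 blocks),
# but the lvstore still reports 49 clusters until it is explicitly grown.
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49

# Later, while bdevperf I/O is running, the test grows the lvstore itself:
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99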
00:08:21.718 [2024-05-16 18:29:35.005447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66047 ] 00:08:21.718 [2024-05-16 18:29:35.147210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.977 [2024-05-16 18:29:35.272857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.977 [2024-05-16 18:29:35.328233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:22.545 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:22.545 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:08:22.545 18:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:22.803 Nvme0n1 00:08:23.063 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:23.322 [ 00:08:23.322 { 00:08:23.322 "name": "Nvme0n1", 00:08:23.322 "aliases": [ 00:08:23.322 "93ce64f1-0f08-4c4c-99f8-d807c3dd9410" 00:08:23.322 ], 00:08:23.322 "product_name": "NVMe disk", 00:08:23.322 "block_size": 4096, 00:08:23.322 "num_blocks": 38912, 00:08:23.322 "uuid": "93ce64f1-0f08-4c4c-99f8-d807c3dd9410", 00:08:23.322 "assigned_rate_limits": { 00:08:23.322 "rw_ios_per_sec": 0, 00:08:23.322 "rw_mbytes_per_sec": 0, 00:08:23.322 "r_mbytes_per_sec": 0, 00:08:23.322 "w_mbytes_per_sec": 0 00:08:23.322 }, 00:08:23.322 "claimed": false, 00:08:23.322 "zoned": false, 00:08:23.322 "supported_io_types": { 00:08:23.322 "read": true, 00:08:23.322 "write": true, 00:08:23.322 "unmap": true, 00:08:23.322 "write_zeroes": true, 00:08:23.322 "flush": true, 00:08:23.322 "reset": true, 00:08:23.322 "compare": true, 00:08:23.322 "compare_and_write": true, 00:08:23.322 "abort": true, 00:08:23.322 "nvme_admin": true, 00:08:23.322 "nvme_io": true 00:08:23.322 }, 00:08:23.322 "memory_domains": [ 00:08:23.322 { 00:08:23.322 "dma_device_id": "system", 00:08:23.322 "dma_device_type": 1 00:08:23.322 } 00:08:23.322 ], 00:08:23.322 "driver_specific": { 00:08:23.322 "nvme": [ 00:08:23.322 { 00:08:23.322 "trid": { 00:08:23.322 "trtype": "TCP", 00:08:23.322 "adrfam": "IPv4", 00:08:23.322 "traddr": "10.0.0.2", 00:08:23.322 "trsvcid": "4420", 00:08:23.322 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:23.322 }, 00:08:23.322 "ctrlr_data": { 00:08:23.322 "cntlid": 1, 00:08:23.322 "vendor_id": "0x8086", 00:08:23.322 "model_number": "SPDK bdev Controller", 00:08:23.322 "serial_number": "SPDK0", 00:08:23.322 "firmware_revision": "24.09", 00:08:23.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:23.322 "oacs": { 00:08:23.322 "security": 0, 00:08:23.322 "format": 0, 00:08:23.322 "firmware": 0, 00:08:23.322 "ns_manage": 0 00:08:23.322 }, 00:08:23.322 "multi_ctrlr": true, 00:08:23.322 "ana_reporting": false 00:08:23.322 }, 00:08:23.322 "vs": { 00:08:23.322 "nvme_version": "1.3" 00:08:23.322 }, 00:08:23.322 "ns_data": { 00:08:23.322 "id": 1, 00:08:23.322 "can_share": true 00:08:23.322 } 00:08:23.322 } 00:08:23.322 ], 00:08:23.322 "mp_policy": "active_passive" 00:08:23.322 } 00:08:23.322 } 00:08:23.322 ] 
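A quick cross-check of the JSON above: the 150 MiB lvol is rounded up to 38 whole clusters (38 x 4 MiB = 152 MiB), which is exactly the 38912 blocks of 4096 bytes the remote NVMe namespace reports. A one-liner sketch for pulling that out of bdev_get_bdevs with jq, with sock as in the bdevperf sketch earlier:

sock=/var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock bdev_get_bdevs -b Nvme0n1 \
    | jq -r '.[0] | "\(.num_blocks) blocks x \(.block_size) B = \(.num_blocks * .block_size / 1048576) MiB"'
# -> 38912 blocks x 4096 B = 152 MiB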
00:08:23.322 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66076 00:08:23.322 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:23.322 18:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:23.322 Running I/O for 10 seconds... 00:08:24.273 Latency(us) 00:08:24.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.273 Nvme0n1 : 1.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:24.274 =================================================================================================================== 00:08:24.274 Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:24.274 00:08:25.211 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:25.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.470 Nvme0n1 : 2.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:25.470 =================================================================================================================== 00:08:25.470 Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:08:25.470 00:08:25.470 true 00:08:25.470 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:25.470 18:29:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:25.729 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:25.729 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:25.729 18:29:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66076 00:08:26.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.296 Nvme0n1 : 3.00 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:26.296 =================================================================================================================== 00:08:26.296 Total : 6434.67 25.14 0.00 0.00 0.00 0.00 0.00 00:08:26.296 00:08:27.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.675 Nvme0n1 : 4.00 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:08:27.675 =================================================================================================================== 00:08:27.675 Total : 6508.75 25.42 0.00 0.00 0.00 0.00 0.00 00:08:27.675 00:08:28.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.245 Nvme0n1 : 5.00 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:08:28.245 =================================================================================================================== 00:08:28.245 Total : 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:08:28.245 00:08:29.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.646 Nvme0n1 : 6.00 6186.50 24.17 0.00 0.00 0.00 0.00 0.00 00:08:29.646 =================================================================================================================== 00:08:29.646 
Total : 6186.50 24.17 0.00 0.00 0.00 0.00 0.00 00:08:29.646 00:08:30.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.582 Nvme0n1 : 7.00 6191.71 24.19 0.00 0.00 0.00 0.00 0.00 00:08:30.582 =================================================================================================================== 00:08:30.582 Total : 6191.71 24.19 0.00 0.00 0.00 0.00 0.00 00:08:30.582 00:08:31.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.519 Nvme0n1 : 8.00 6179.75 24.14 0.00 0.00 0.00 0.00 0.00 00:08:31.519 =================================================================================================================== 00:08:31.519 Total : 6179.75 24.14 0.00 0.00 0.00 0.00 0.00 00:08:31.519 00:08:32.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.456 Nvme0n1 : 9.00 6198.67 24.21 0.00 0.00 0.00 0.00 0.00 00:08:32.456 =================================================================================================================== 00:08:32.456 Total : 6198.67 24.21 0.00 0.00 0.00 0.00 0.00 00:08:32.456 00:08:33.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.392 Nvme0n1 : 10.00 6201.10 24.22 0.00 0.00 0.00 0.00 0.00 00:08:33.392 =================================================================================================================== 00:08:33.392 Total : 6201.10 24.22 0.00 0.00 0.00 0.00 0.00 00:08:33.392 00:08:33.392 00:08:33.392 Latency(us) 00:08:33.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.392 Nvme0n1 : 10.00 6211.60 24.26 0.00 0.00 20599.34 10902.81 318385.80 00:08:33.392 =================================================================================================================== 00:08:33.392 Total : 6211.60 24.26 0.00 0.00 20599.34 10902.81 318385.80 00:08:33.392 0 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66047 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 66047 ']' 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 66047 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66047 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66047' 00:08:33.392 killing process with pid 66047 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 66047 00:08:33.392 Received shutdown signal, test time was about 10.000000 seconds 00:08:33.392 00:08:33.392 Latency(us) 00:08:33.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.392 
=================================================================================================================== 00:08:33.392 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:33.392 18:29:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 66047 00:08:33.651 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.910 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65683 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65683 00:08:34.479 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65683 Killed "${NVMF_APP[@]}" "$@" 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66209 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66209 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 66209 ']' 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:34.479 18:29:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:34.738 [2024-05-16 18:29:48.012055] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
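This is where the dirty variant diverges from the clean one: instead of deleting the lvol and lvstore, the test kills the first nvmf_tgt outright so the lvstore metadata on the AIO file is never cleanly unloaded, then brings up a fresh target in the same namespace. A sketch of that step, with nvmfpid as the shell variable the script uses (65683 and 66209 are the PIDs from this particular run):

kill -9 "$nvmfpid"        # SIGKILL the first target (pid 65683 here); no graceful shutdown
wait "$nvmfpid" || true   # reap it; the shell reports "65683 Killed" as seen above

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!                # the new target, pid 66209 in this run
# The script then waits for /var/tmp/spdk.sock before issuing any further RPCs.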
00:08:34.738 [2024-05-16 18:29:48.012449] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.738 [2024-05-16 18:29:48.151964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.998 [2024-05-16 18:29:48.307845] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.998 [2024-05-16 18:29:48.308251] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.998 [2024-05-16 18:29:48.308437] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.998 [2024-05-16 18:29:48.308451] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.998 [2024-05-16 18:29:48.308458] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.998 [2024-05-16 18:29:48.308488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.998 [2024-05-16 18:29:48.387002] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:35.629 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:35.629 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:08:35.629 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.629 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.629 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:35.629 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.629 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:35.902 [2024-05-16 18:29:49.303342] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:35.902 [2024-05-16 18:29:49.303899] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:35.902 [2024-05-16 18:29:49.304271] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:35.902 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:35.902 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 93ce64f1-0f08-4c4c-99f8-d807c3dd9410 00:08:35.902 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=93ce64f1-0f08-4c4c-99f8-d807c3dd9410 00:08:35.902 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:35.902 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:08:35.902 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:35.902 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:35.902 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:08:36.161 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93ce64f1-0f08-4c4c-99f8-d807c3dd9410 -t 2000 00:08:36.420 [ 00:08:36.420 { 00:08:36.420 "name": "93ce64f1-0f08-4c4c-99f8-d807c3dd9410", 00:08:36.420 "aliases": [ 00:08:36.420 "lvs/lvol" 00:08:36.420 ], 00:08:36.420 "product_name": "Logical Volume", 00:08:36.420 "block_size": 4096, 00:08:36.420 "num_blocks": 38912, 00:08:36.420 "uuid": "93ce64f1-0f08-4c4c-99f8-d807c3dd9410", 00:08:36.420 "assigned_rate_limits": { 00:08:36.420 "rw_ios_per_sec": 0, 00:08:36.420 "rw_mbytes_per_sec": 0, 00:08:36.420 "r_mbytes_per_sec": 0, 00:08:36.420 "w_mbytes_per_sec": 0 00:08:36.420 }, 00:08:36.420 "claimed": false, 00:08:36.420 "zoned": false, 00:08:36.420 "supported_io_types": { 00:08:36.420 "read": true, 00:08:36.420 "write": true, 00:08:36.420 "unmap": true, 00:08:36.420 "write_zeroes": true, 00:08:36.420 "flush": false, 00:08:36.420 "reset": true, 00:08:36.420 "compare": false, 00:08:36.420 "compare_and_write": false, 00:08:36.420 "abort": false, 00:08:36.420 "nvme_admin": false, 00:08:36.420 "nvme_io": false 00:08:36.420 }, 00:08:36.420 "driver_specific": { 00:08:36.420 "lvol": { 00:08:36.420 "lvol_store_uuid": "cad88364-025e-4e93-8d5b-fea14a1e7200", 00:08:36.420 "base_bdev": "aio_bdev", 00:08:36.420 "thin_provision": false, 00:08:36.420 "num_allocated_clusters": 38, 00:08:36.420 "snapshot": false, 00:08:36.420 "clone": false, 00:08:36.420 "esnap_clone": false 00:08:36.420 } 00:08:36.420 } 00:08:36.420 } 00:08:36.420 ] 00:08:36.678 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:08:36.678 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:36.678 18:29:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:36.937 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:36.937 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:36.937 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:37.196 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:37.196 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:37.455 [2024-05-16 18:29:50.721072] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # 
local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:37.455 18:29:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:37.714 request: 00:08:37.714 { 00:08:37.714 "uuid": "cad88364-025e-4e93-8d5b-fea14a1e7200", 00:08:37.714 "method": "bdev_lvol_get_lvstores", 00:08:37.714 "req_id": 1 00:08:37.714 } 00:08:37.714 Got JSON-RPC error response 00:08:37.714 response: 00:08:37.714 { 00:08:37.714 "code": -19, 00:08:37.714 "message": "No such device" 00:08:37.714 } 00:08:37.714 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:37.714 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:37.714 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:37.714 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:37.714 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.973 aio_bdev 00:08:37.973 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 93ce64f1-0f08-4c4c-99f8-d807c3dd9410 00:08:37.973 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=93ce64f1-0f08-4c4c-99f8-d807c3dd9410 00:08:37.973 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:08:37.973 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:08:37.973 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:08:37.973 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:08:37.973 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:38.232 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 93ce64f1-0f08-4c4c-99f8-d807c3dd9410 -t 2000 00:08:38.490 [ 00:08:38.490 { 00:08:38.490 "name": "93ce64f1-0f08-4c4c-99f8-d807c3dd9410", 00:08:38.490 "aliases": [ 00:08:38.490 "lvs/lvol" 00:08:38.490 ], 00:08:38.490 
"product_name": "Logical Volume", 00:08:38.490 "block_size": 4096, 00:08:38.490 "num_blocks": 38912, 00:08:38.490 "uuid": "93ce64f1-0f08-4c4c-99f8-d807c3dd9410", 00:08:38.490 "assigned_rate_limits": { 00:08:38.490 "rw_ios_per_sec": 0, 00:08:38.490 "rw_mbytes_per_sec": 0, 00:08:38.490 "r_mbytes_per_sec": 0, 00:08:38.490 "w_mbytes_per_sec": 0 00:08:38.490 }, 00:08:38.490 "claimed": false, 00:08:38.490 "zoned": false, 00:08:38.490 "supported_io_types": { 00:08:38.490 "read": true, 00:08:38.490 "write": true, 00:08:38.490 "unmap": true, 00:08:38.490 "write_zeroes": true, 00:08:38.490 "flush": false, 00:08:38.490 "reset": true, 00:08:38.490 "compare": false, 00:08:38.490 "compare_and_write": false, 00:08:38.490 "abort": false, 00:08:38.490 "nvme_admin": false, 00:08:38.490 "nvme_io": false 00:08:38.490 }, 00:08:38.490 "driver_specific": { 00:08:38.490 "lvol": { 00:08:38.490 "lvol_store_uuid": "cad88364-025e-4e93-8d5b-fea14a1e7200", 00:08:38.490 "base_bdev": "aio_bdev", 00:08:38.490 "thin_provision": false, 00:08:38.490 "num_allocated_clusters": 38, 00:08:38.490 "snapshot": false, 00:08:38.490 "clone": false, 00:08:38.490 "esnap_clone": false 00:08:38.490 } 00:08:38.490 } 00:08:38.490 } 00:08:38.490 ] 00:08:38.490 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:08:38.490 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:38.490 18:29:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:38.750 18:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:38.750 18:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:38.750 18:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:39.009 18:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:39.009 18:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 93ce64f1-0f08-4c4c-99f8-d807c3dd9410 00:08:39.268 18:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cad88364-025e-4e93-8d5b-fea14a1e7200 00:08:39.528 18:29:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.787 18:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:40.354 ************************************ 00:08:40.354 END TEST lvs_grow_dirty 00:08:40.354 ************************************ 00:08:40.354 00:08:40.354 real 0m21.505s 00:08:40.354 user 0m45.139s 00:08:40.354 sys 0m8.526s 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- 
common/autotest_common.sh@804 -- # type=--id 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:40.354 nvmf_trace.0 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.354 rmmod nvme_tcp 00:08:40.354 rmmod nvme_fabrics 00:08:40.354 rmmod nvme_keyring 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66209 ']' 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66209 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 66209 ']' 00:08:40.354 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 66209 00:08:40.355 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:08:40.355 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:40.355 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66209 00:08:40.355 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:40.355 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:40.355 killing process with pid 66209 00:08:40.355 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66209' 00:08:40.355 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 66209 00:08:40.355 18:29:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 66209 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:40.614 00:08:40.614 real 0m42.818s 00:08:40.614 user 1m9.668s 00:08:40.614 sys 0m11.831s 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:40.614 18:29:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.614 ************************************ 00:08:40.614 END TEST nvmf_lvs_grow 00:08:40.614 ************************************ 00:08:40.873 18:29:54 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:40.873 18:29:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:40.873 18:29:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:40.873 18:29:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.873 ************************************ 00:08:40.873 START TEST nvmf_bdev_io_wait 00:08:40.873 ************************************ 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:40.873 * Looking for test storage... 00:08:40.873 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.873 18:29:54 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.873 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.874 18:29:54 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:40.874 Cannot find device "nvmf_tgt_br" 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:40.874 Cannot find device "nvmf_tgt_br2" 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:40.874 Cannot find device "nvmf_tgt_br" 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:40.874 Cannot find device "nvmf_tgt_br2" 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:40.874 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:41.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:41.134 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:41.134 18:29:54 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:41.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:08:41.134 00:08:41.134 --- 10.0.0.2 ping statistics --- 00:08:41.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.134 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:41.134 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:41.134 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:41.134 00:08:41.134 --- 10.0.0.3 ping statistics --- 00:08:41.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.134 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:41.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:41.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:41.134 00:08:41.134 --- 10.0.0.1 ping statistics --- 00:08:41.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.134 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66533 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66533 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 66533 ']' 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:41.134 18:29:54 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.395 [2024-05-16 18:29:54.673042] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:08:41.395 [2024-05-16 18:29:54.673149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.395 [2024-05-16 18:29:54.815030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.653 [2024-05-16 18:29:54.937261] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.653 [2024-05-16 18:29:54.937485] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:41.653 [2024-05-16 18:29:54.937703] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.653 [2024-05-16 18:29:54.937962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.653 [2024-05-16 18:29:54.938147] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.653 [2024-05-16 18:29:54.938470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.653 [2024-05-16 18:29:54.938580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.653 [2024-05-16 18:29:54.938621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.653 [2024-05-16 18:29:54.938626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.221 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:42.221 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.222 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.481 [2024-05-16 18:29:55.767288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.481 [2024-05-16 18:29:55.785168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.481 Malloc0 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:42.481 [2024-05-16 18:29:55.859543] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:42.481 [2024-05-16 18:29:55.859952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.481 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66568 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66570 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66572 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:42.482 { 00:08:42.482 "params": { 00:08:42.482 "name": "Nvme$subsystem", 00:08:42.482 "trtype": "$TEST_TRANSPORT", 00:08:42.482 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:08:42.482 "adrfam": "ipv4", 00:08:42.482 "trsvcid": "$NVMF_PORT", 00:08:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.482 "hdgst": ${hdgst:-false}, 00:08:42.482 "ddgst": ${ddgst:-false} 00:08:42.482 }, 00:08:42.482 "method": "bdev_nvme_attach_controller" 00:08:42.482 } 00:08:42.482 EOF 00:08:42.482 )") 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:42.482 { 00:08:42.482 "params": { 00:08:42.482 "name": "Nvme$subsystem", 00:08:42.482 "trtype": "$TEST_TRANSPORT", 00:08:42.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.482 "adrfam": "ipv4", 00:08:42.482 "trsvcid": "$NVMF_PORT", 00:08:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.482 "hdgst": ${hdgst:-false}, 00:08:42.482 "ddgst": ${ddgst:-false} 00:08:42.482 }, 00:08:42.482 "method": "bdev_nvme_attach_controller" 00:08:42.482 } 00:08:42.482 EOF 00:08:42.482 )") 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66574 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:42.482 { 00:08:42.482 "params": { 00:08:42.482 "name": "Nvme$subsystem", 00:08:42.482 "trtype": "$TEST_TRANSPORT", 00:08:42.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.482 "adrfam": "ipv4", 00:08:42.482 "trsvcid": "$NVMF_PORT", 00:08:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.482 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.482 "hdgst": ${hdgst:-false}, 00:08:42.482 "ddgst": ${ddgst:-false} 00:08:42.482 }, 00:08:42.482 "method": "bdev_nvme_attach_controller" 00:08:42.482 } 00:08:42.482 EOF 00:08:42.482 )") 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:42.482 { 00:08:42.482 "params": { 00:08:42.482 "name": "Nvme$subsystem", 00:08:42.482 "trtype": "$TEST_TRANSPORT", 00:08:42.482 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.482 "adrfam": "ipv4", 00:08:42.482 "trsvcid": "$NVMF_PORT", 00:08:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.482 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:42.482 "hdgst": ${hdgst:-false}, 00:08:42.482 "ddgst": ${ddgst:-false} 00:08:42.482 }, 00:08:42.482 "method": "bdev_nvme_attach_controller" 00:08:42.482 } 00:08:42.482 EOF 00:08:42.482 )") 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:42.482 "params": { 00:08:42.482 "name": "Nvme1", 00:08:42.482 "trtype": "tcp", 00:08:42.482 "traddr": "10.0.0.2", 00:08:42.482 "adrfam": "ipv4", 00:08:42.482 "trsvcid": "4420", 00:08:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.482 "hdgst": false, 00:08:42.482 "ddgst": false 00:08:42.482 }, 00:08:42.482 "method": "bdev_nvme_attach_controller" 00:08:42.482 }' 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:42.482 "params": { 00:08:42.482 "name": "Nvme1", 00:08:42.482 "trtype": "tcp", 00:08:42.482 "traddr": "10.0.0.2", 00:08:42.482 "adrfam": "ipv4", 00:08:42.482 "trsvcid": "4420", 00:08:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.482 "hdgst": false, 00:08:42.482 "ddgst": false 00:08:42.482 }, 00:08:42.482 "method": "bdev_nvme_attach_controller" 00:08:42.482 }' 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:42.482 "params": { 00:08:42.482 "name": "Nvme1", 00:08:42.482 "trtype": "tcp", 00:08:42.482 "traddr": "10.0.0.2", 00:08:42.482 "adrfam": "ipv4", 00:08:42.482 "trsvcid": "4420", 00:08:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.482 "hdgst": false, 00:08:42.482 "ddgst": false 00:08:42.482 }, 00:08:42.482 "method": "bdev_nvme_attach_controller" 00:08:42.482 }' 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:42.482 "params": { 00:08:42.482 "name": "Nvme1", 00:08:42.482 "trtype": "tcp", 00:08:42.482 "traddr": "10.0.0.2", 00:08:42.482 "adrfam": "ipv4", 00:08:42.482 "trsvcid": "4420", 00:08:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:42.482 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:42.482 "hdgst": false, 00:08:42.482 "ddgst": false 00:08:42.482 }, 00:08:42.482 "method": "bdev_nvme_attach_controller" 00:08:42.482 }' 00:08:42.482 [2024-05-16 18:29:55.918127] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:08:42.482 [2024-05-16 18:29:55.918442] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:42.482 [2024-05-16 18:29:55.922110] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:08:42.482 [2024-05-16 18:29:55.922333] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-05-16 18:29:55.922351] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:08:42.482 [2024-05-16 18:29:55.922451] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:42.482 .cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:42.482 18:29:55 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66568 00:08:42.482 [2024-05-16 18:29:55.956525] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:08:42.482 [2024-05-16 18:29:55.956624] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:42.742 [2024-05-16 18:29:56.135329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.742 [2024-05-16 18:29:56.213007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.001 [2024-05-16 18:29:56.253803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:08:43.001 [2024-05-16 18:29:56.320214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.001 [2024-05-16 18:29:56.328297] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.001 [2024-05-16 18:29:56.344962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:08:43.001 [2024-05-16 18:29:56.401423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.001 [2024-05-16 18:29:56.405453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.001 Running I/O for 1 seconds... 00:08:43.001 [2024-05-16 18:29:56.450376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:43.001 Running I/O for 1 seconds... 00:08:43.260 [2024-05-16 18:29:56.506043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:08:43.260 [2024-05-16 18:29:56.513237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.260 [2024-05-16 18:29:56.553594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:43.260 Running I/O for 1 seconds... 00:08:43.260 Running I/O for 1 seconds... 
00:08:44.197 00:08:44.197 Latency(us) 00:08:44.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.197 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:44.197 Nvme1n1 : 1.03 4677.31 18.27 0.00 0.00 26768.84 9711.24 60054.81 00:08:44.197 =================================================================================================================== 00:08:44.197 Total : 4677.31 18.27 0.00 0.00 26768.84 9711.24 60054.81 00:08:44.197 00:08:44.197 Latency(us) 00:08:44.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.197 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:44.197 Nvme1n1 : 1.00 166929.72 652.07 0.00 0.00 763.91 355.61 1258.59 00:08:44.197 =================================================================================================================== 00:08:44.197 Total : 166929.72 652.07 0.00 0.00 763.91 355.61 1258.59 00:08:44.197 00:08:44.197 Latency(us) 00:08:44.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.197 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:44.197 Nvme1n1 : 1.01 5971.53 23.33 0.00 0.00 21281.76 6047.19 33602.09 00:08:44.197 =================================================================================================================== 00:08:44.197 Total : 5971.53 23.33 0.00 0.00 21281.76 6047.19 33602.09 00:08:44.197 00:08:44.197 Latency(us) 00:08:44.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.197 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:44.197 Nvme1n1 : 1.01 3980.16 15.55 0.00 0.00 32009.98 9234.62 68157.44 00:08:44.197 =================================================================================================================== 00:08:44.197 Total : 3980.16 15.55 0.00 0.00 32009.98 9234.62 68157.44 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66570 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66572 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66574 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.456 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:44.715 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.715 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:44.715 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.715 18:29:57 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.715 rmmod nvme_tcp 00:08:44.715 rmmod nvme_fabrics 00:08:44.715 rmmod nvme_keyring 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66533 ']' 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66533 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 66533 ']' 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 66533 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66533 00:08:44.715 killing process with pid 66533 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66533' 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 66533 00:08:44.715 [2024-05-16 18:29:58.065677] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:44.715 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 66533 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:44.974 ************************************ 00:08:44.974 END TEST nvmf_bdev_io_wait 00:08:44.974 ************************************ 00:08:44.974 00:08:44.974 real 0m4.247s 00:08:44.974 user 0m18.669s 00:08:44.974 sys 0m2.151s 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:44.974 18:29:58 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:44.974 18:29:58 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:44.974 18:29:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:44.974 18:29:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:44.974 18:29:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:44.974 
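Before the next test starts, note that the nvmftestfini teardown logged above reduces to a short sequence of generic commands. A condensed sketch follows; the netns removal is an assumption about what the _remove_spdk_ns helper does, while everything else mirrors commands visible in the log:

# Condensed teardown, mirroring nvmftestfini for the veth-based TCP setup.
sync                                     # nvmfcleanup starts by flushing outstanding I/O
modprobe -v -r nvme-tcp                  # the log shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid" 2>/dev/null              # stop the nvmf_tgt app (pid 66533 in this run)
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush nvmf_init_if            # drop 10.0.0.1/24 from the initiator-side veth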
************************************ 00:08:44.974 START TEST nvmf_queue_depth 00:08:44.974 ************************************ 00:08:44.974 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.233 * Looking for test storage... 00:08:45.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:45.233 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:45.234 Cannot find device "nvmf_tgt_br" 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:45.234 Cannot find device "nvmf_tgt_br2" 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:45.234 Cannot find device "nvmf_tgt_br" 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:45.234 Cannot find device "nvmf_tgt_br2" 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:45.234 18:29:58 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:45.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:45.234 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:45.234 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:08:45.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:08:45.493 00:08:45.493 --- 10.0.0.2 ping statistics --- 00:08:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.493 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:45.493 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:45.493 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:45.493 00:08:45.493 --- 10.0.0.3 ping statistics --- 00:08:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.493 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:08:45.493 00:08:45.493 --- 10.0.0.1 ping statistics --- 00:08:45.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.493 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=66813 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 66813 00:08:45.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 66813 ']' 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
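The addresses those pings just verified come from the veth/namespace topology that nvmf_veth_init builds: one initiator-side veth (10.0.0.1) in the root namespace and two target-side veths (10.0.0.2 and 10.0.0.3) inside nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge. The topology can be reproduced standalone with the same iproute2/iptables commands that appear in the log; a minimal sketch:

# Rebuild the test network: root-namespace initiator veth plus two target veths
# inside nvmf_tgt_ns_spdk, bridged together and opened up for NVMe/TCP port 4420.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # initiator -> first target address
ping -c 1 10.0.0.3   # initiator -> second target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator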
00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:45.493 18:29:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.493 [2024-05-16 18:29:58.963717] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:08:45.494 [2024-05-16 18:29:58.964312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.753 [2024-05-16 18:29:59.103107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.012 [2024-05-16 18:29:59.267910] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.012 [2024-05-16 18:29:59.267968] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.012 [2024-05-16 18:29:59.267982] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.012 [2024-05-16 18:29:59.267994] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.012 [2024-05-16 18:29:59.268003] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.012 [2024-05-16 18:29:59.268042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.012 [2024-05-16 18:29:59.346349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.580 [2024-05-16 18:29:59.948979] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.580 Malloc0 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.580 18:29:59 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.580 [2024-05-16 18:30:00.018233] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:46.580 [2024-05-16 18:30:00.018554] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66845 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66845 /var/tmp/bdevperf.sock 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 66845 ']' 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:46.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:46.580 18:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.580 [2024-05-16 18:30:00.076756] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:08:46.580 [2024-05-16 18:30:00.076892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66845 ] 00:08:46.839 [2024-05-16 18:30:00.217501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.097 [2024-05-16 18:30:00.372964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.097 [2024-05-16 18:30:00.447274] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:47.664 18:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:47.664 18:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:08:47.664 18:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:47.664 18:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.664 18:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:47.923 NVMe0n1 00:08:47.923 18:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.923 18:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:47.923 Running I/O for 10 seconds... 00:08:57.909 00:08:57.909 Latency(us) 00:08:57.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.909 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:57.909 Verification LBA range: start 0x0 length 0x4000 00:08:57.909 NVMe0n1 : 10.07 7868.12 30.73 0.00 0.00 129478.40 12868.89 95325.09 00:08:57.909 =================================================================================================================== 00:08:57.909 Total : 7868.12 30.73 0.00 0.00 129478.40 12868.89 95325.09 00:08:57.909 0 00:08:57.909 18:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66845 00:08:57.909 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 66845 ']' 00:08:57.909 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 66845 00:08:57.909 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:08:57.909 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:57.909 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66845 00:08:58.168 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:58.168 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:58.168 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66845' 00:08:58.168 killing process with pid 66845 00:08:58.168 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 66845 00:08:58.168 Received shutdown signal, test time was about 10.000000 seconds 00:08:58.168 00:08:58.168 Latency(us) 00:08:58.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.168 
=================================================================================================================== 00:08:58.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:58.168 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 66845 00:08:58.168 18:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:58.168 18:30:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:58.168 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.168 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.426 rmmod nvme_tcp 00:08:58.426 rmmod nvme_fabrics 00:08:58.426 rmmod nvme_keyring 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 66813 ']' 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 66813 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 66813 ']' 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 66813 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66813 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:58.426 killing process with pid 66813 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66813' 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 66813 00:08:58.426 [2024-05-16 18:30:11.778331] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:58.426 18:30:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 66813 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:58.685 00:08:58.685 real 0m13.706s 00:08:58.685 user 0m23.549s 00:08:58.685 sys 0m2.392s 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:58.685 18:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.685 ************************************ 00:08:58.685 END TEST nvmf_queue_depth 00:08:58.685 ************************************ 00:08:58.943 18:30:12 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:58.944 18:30:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:58.944 18:30:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:58.944 18:30:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.944 ************************************ 00:08:58.944 START TEST nvmf_target_multipath 00:08:58.944 ************************************ 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:58.944 * Looking for test storage... 00:08:58.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
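The multipath test now starting drives the same style of target bring-up as queue_depth, but directly through rpc.py. For orientation, a minimal sketch of that control-plane flow, assuming an nvmf_tgt instance is already up on its default RPC socket and using only the subcommands, NQN, serial and addresses visible in this log (the flag comments are interpretive, not copied from the test):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$RPC" nvmf_create_transport -t tcp -o -u 8192          # TCP transport with the options used throughout this run
"$RPC" bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r enables ANA reporting for the multipath checks
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The initiator side then attaches both paths with the two nvme connect commands that appear further down in the log.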
00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.944 18:30:12 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.944 18:30:12 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:58.944 Cannot find device "nvmf_tgt_br" 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.944 Cannot find device "nvmf_tgt_br2" 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:58.944 Cannot find device "nvmf_tgt_br" 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:58.944 Cannot find device "nvmf_tgt_br2" 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:58.944 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:59.203 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:59.203 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:59.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:59.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:08:59.204 00:08:59.204 --- 10.0.0.2 ping statistics --- 00:08:59.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.204 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:59.204 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:59.204 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:08:59.204 00:08:59.204 --- 10.0.0.3 ping statistics --- 00:08:59.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.204 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:59.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:59.204 00:08:59.204 --- 10.0.0.1 ping statistics --- 00:08:59.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.204 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:59.204 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67163 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67163 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 67163 ']' 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:59.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:59.462 18:30:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:59.462 [2024-05-16 18:30:12.776959] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:08:59.462 [2024-05-16 18:30:12.777070] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.462 [2024-05-16 18:30:12.918579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.721 [2024-05-16 18:30:13.056938] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.721 [2024-05-16 18:30:13.056999] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.721 [2024-05-16 18:30:13.057012] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.721 [2024-05-16 18:30:13.057023] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.721 [2024-05-16 18:30:13.057032] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.721 [2024-05-16 18:30:13.057249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.721 [2024-05-16 18:30:13.057460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.721 [2024-05-16 18:30:13.058190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:59.721 [2024-05-16 18:30:13.058202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.721 [2024-05-16 18:30:13.117306] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:00.658 18:30:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:00.658 18:30:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:09:00.658 18:30:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:00.658 18:30:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.658 18:30:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:00.658 18:30:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.658 18:30:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:00.658 [2024-05-16 18:30:14.105808] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.658 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:01.225 Malloc0 00:09:01.225 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:01.225 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.483 18:30:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.742 [2024-05-16 18:30:15.183469] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in 
v24.09 00:09:01.742 [2024-05-16 18:30:15.183832] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.742 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:02.000 [2024-05-16 18:30:15.432114] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:02.000 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid=8b07fcc8-e6b3-4152-8362-9695ab742add -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:02.259 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid=8b07fcc8-e6b3-4152-8362-9695ab742add -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:02.259 18:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.259 18:30:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:09:02.259 18:30:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.259 18:30:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:02.259 18:30:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:04.788 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67253 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:04.789 18:30:17 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:04.789 [global] 00:09:04.789 thread=1 00:09:04.789 invalidate=1 00:09:04.789 rw=randrw 00:09:04.789 time_based=1 00:09:04.789 runtime=6 00:09:04.789 ioengine=libaio 00:09:04.789 direct=1 00:09:04.789 bs=4096 00:09:04.789 iodepth=128 00:09:04.789 norandommap=0 00:09:04.789 numjobs=1 00:09:04.789 00:09:04.789 verify_dump=1 00:09:04.789 verify_backlog=512 00:09:04.789 verify_state_save=0 00:09:04.789 do_verify=1 00:09:04.789 verify=crc32c-intel 00:09:04.789 [job0] 00:09:04.789 filename=/dev/nvme0n1 00:09:04.789 Could not set queue depth (nvme0n1) 00:09:04.789 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.789 fio-3.35 00:09:04.789 Starting 1 thread 00:09:05.357 18:30:18 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:05.616 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:05.875 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # 
check_ana_state nvme0c0n1 inaccessible 00:09:05.875 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:05.876 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:06.135 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:06.393 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:06.394 18:30:19 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67253 00:09:10.585 00:09:10.585 job0: (groupid=0, jobs=1): err= 0: pid=67280: Thu May 16 18:30:24 2024 00:09:10.585 read: IOPS=9575, BW=37.4MiB/s (39.2MB/s)(225MiB/6007msec) 00:09:10.585 slat (usec): min=4, max=8475, avg=62.34, stdev=252.36 00:09:10.585 clat (usec): min=1761, max=18368, avg=9210.39, stdev=1736.20 00:09:10.585 lat (usec): min=1775, max=18400, avg=9272.73, stdev=1741.57 00:09:10.585 clat percentiles (usec): 00:09:10.585 | 1.00th=[ 4752], 5.00th=[ 6783], 10.00th=[ 7701], 20.00th=[ 8291], 00:09:10.585 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:09:10.585 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[11338], 95.00th=[13173], 00:09:10.585 | 99.00th=[14615], 99.50th=[15270], 99.90th=[16188], 99.95th=[16909], 00:09:10.585 | 99.99th=[16909] 00:09:10.585 bw ( KiB/s): min=11344, max=23760, per=49.95%, avg=19131.18, stdev=4392.85, samples=11 00:09:10.585 iops : min= 2836, max= 5940, avg=4782.73, stdev=1098.15, samples=11 00:09:10.585 write: IOPS=5436, BW=21.2MiB/s (22.3MB/s)(114MiB/5367msec); 0 zone resets 00:09:10.585 slat (usec): min=13, max=5928, avg=71.39, stdev=179.61 00:09:10.585 clat (usec): min=2308, max=16997, avg=7957.82, stdev=1581.86 00:09:10.585 lat (usec): min=2326, max=17023, avg=8029.21, stdev=1587.26 00:09:10.585 clat percentiles (usec): 00:09:10.585 | 1.00th=[ 3458], 5.00th=[ 4555], 10.00th=[ 5800], 20.00th=[ 7242], 00:09:10.585 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:09:10.585 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10159], 00:09:10.585 | 99.00th=[12649], 99.50th=[13435], 99.90th=[14484], 99.95th=[15139], 00:09:10.585 | 99.99th=[16581] 00:09:10.585 bw ( KiB/s): min=11832, max=23272, per=88.09%, avg=19158.73, stdev=4069.89, samples=11 00:09:10.585 iops : min= 2958, max= 5818, avg=4789.55, stdev=1017.54, samples=11 00:09:10.585 lat (msec) : 2=0.01%, 4=1.07%, 10=84.97%, 20=13.95% 00:09:10.585 cpu : usr=5.61%, sys=20.83%, ctx=5098, majf=0, minf=96 00:09:10.585 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:09:10.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.585 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.585 issued rwts: total=57520,29180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.585 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.585 00:09:10.585 Run status group 0 (all jobs): 00:09:10.585 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=225MiB (236MB), run=6007-6007msec 00:09:10.585 WRITE: bw=21.2MiB/s (22.3MB/s), 21.2MiB/s-21.2MiB/s (22.3MB/s-22.3MB/s), io=114MiB (120MB), run=5367-5367msec 00:09:10.585 00:09:10.585 Disk stats (read/write): 00:09:10.585 nvme0n1: ios=56644/28612, merge=0/0, ticks=500560/214506, in_queue=715066, util=98.55% 00:09:10.585 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:10.843 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67355 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:11.101 18:30:24 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:11.101 [global] 00:09:11.101 thread=1 00:09:11.101 invalidate=1 00:09:11.101 rw=randrw 00:09:11.101 time_based=1 00:09:11.101 runtime=6 00:09:11.101 ioengine=libaio 00:09:11.101 direct=1 00:09:11.101 bs=4096 00:09:11.101 iodepth=128 00:09:11.101 norandommap=0 00:09:11.101 numjobs=1 00:09:11.101 00:09:11.101 verify_dump=1 00:09:11.101 verify_backlog=512 00:09:11.101 verify_state_save=0 00:09:11.101 do_verify=1 00:09:11.101 verify=crc32c-intel 00:09:11.101 [job0] 00:09:11.101 filename=/dev/nvme0n1 00:09:11.101 Could not set queue depth (nvme0n1) 00:09:11.359 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.359 fio-3.35 00:09:11.359 Starting 1 thread 00:09:12.296 18:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:12.555 18:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:12.814 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:13.073 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:13.332 18:30:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67355 00:09:17.564 00:09:17.564 job0: (groupid=0, jobs=1): err= 0: pid=67381: Thu May 16 18:30:30 2024 00:09:17.564 read: IOPS=10.4k, BW=40.6MiB/s (42.6MB/s)(244MiB/6007msec) 00:09:17.564 slat (usec): min=6, max=6337, avg=48.18, stdev=204.31 00:09:17.564 clat (usec): min=398, max=18106, avg=8431.11, stdev=2182.99 00:09:17.564 lat (usec): min=410, max=18116, avg=8479.29, stdev=2196.98 00:09:17.564 clat percentiles (usec): 00:09:17.564 | 1.00th=[ 2540], 5.00th=[ 4228], 10.00th=[ 5342], 20.00th=[ 6849], 00:09:17.564 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9110], 00:09:17.564 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10290], 95.00th=[12256], 00:09:17.564 | 99.00th=[14091], 99.50th=[14353], 99.90th=[15533], 99.95th=[16450], 00:09:17.564 | 99.99th=[17433] 00:09:17.564 bw ( KiB/s): min= 9480, max=34016, per=52.86%, avg=22002.67, stdev=7147.16, samples=12 00:09:17.564 iops : min= 2370, max= 8504, avg=5500.67, stdev=1786.79, samples=12 00:09:17.564 write: IOPS=6249, BW=24.4MiB/s (25.6MB/s)(129MiB/5297msec); 0 zone resets 00:09:17.564 slat (usec): min=12, max=2169, avg=57.87, stdev=147.42 00:09:17.564 clat (usec): min=705, max=16647, avg=7120.65, stdev=1933.44 00:09:17.564 lat (usec): min=732, max=16672, avg=7178.52, stdev=1950.48 00:09:17.564 clat percentiles (usec): 00:09:17.564 | 1.00th=[ 2671], 5.00th=[ 3621], 10.00th=[ 4228], 20.00th=[ 5080], 00:09:17.564 | 30.00th=[ 6128], 40.00th=[ 7308], 50.00th=[ 7767], 60.00th=[ 8094], 00:09:17.564 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9241], 00:09:17.564 | 99.00th=[11731], 99.50th=[12649], 99.90th=[14353], 99.95th=[15008], 00:09:17.564 | 99.99th=[16581] 00:09:17.564 bw ( KiB/s): min= 9728, max=33696, per=88.13%, avg=22028.00, stdev=7018.48, samples=12 00:09:17.564 iops : min= 2432, max= 8424, avg=5507.00, stdev=1754.62, samples=12 00:09:17.564 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:09:17.564 lat (msec) : 2=0.41%, 4=5.18%, 10=85.57%, 20=8.81% 00:09:17.564 cpu : usr=5.31%, sys=22.76%, ctx=5506, majf=0, minf=96 00:09:17.564 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.564 issued rwts: total=62503,33101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.564 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.564 00:09:17.564 Run status group 0 (all jobs): 00:09:17.564 READ: bw=40.6MiB/s (42.6MB/s), 40.6MiB/s-40.6MiB/s (42.6MB/s-42.6MB/s), io=244MiB (256MB), run=6007-6007msec 00:09:17.564 WRITE: bw=24.4MiB/s (25.6MB/s), 24.4MiB/s-24.4MiB/s (25.6MB/s-25.6MB/s), io=129MiB (136MB), run=5297-5297msec 00:09:17.564 00:09:17.564 Disk stats (read/write): 00:09:17.564 nvme0n1: ios=61755/32503, merge=0/0, ticks=500619/217327, in_queue=717946, util=98.66% 00:09:17.564 18:30:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:17.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:17.564 18:30:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:17.564 18:30:31 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1215 -- # local i=0 00:09:17.564 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.564 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:17.564 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:17.564 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.869 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:17.869 rmmod nvme_tcp 00:09:17.869 rmmod nvme_fabrics 00:09:18.128 rmmod nvme_keyring 00:09:18.128 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.128 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:18.128 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:18.128 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67163 ']' 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67163 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 67163 ']' 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 67163 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67163 00:09:18.129 killing process with pid 67163 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67163' 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 67163 00:09:18.129 [2024-05-16 18:30:31.428290] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:18.129 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 67163 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:18.389 00:09:18.389 real 0m19.510s 00:09:18.389 user 1m14.243s 00:09:18.389 sys 0m8.402s 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:18.389 ************************************ 00:09:18.389 END TEST nvmf_target_multipath 00:09:18.389 18:30:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:18.389 ************************************ 00:09:18.389 18:30:31 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:18.390 18:30:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:18.390 18:30:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:18.390 18:30:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:18.390 ************************************ 00:09:18.390 START TEST nvmf_zcopy 00:09:18.390 ************************************ 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:18.390 * Looking for test storage... 
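The multipath run above amounts to flipping ANA states on the two TCP listeners with rpc.py and polling each path's sysfs ana_state until the host observes the change, while fio keeps I/O in flight. A minimal bash sketch of that flow, reusing only the rpc.py path, subsystem NQN, listener addresses, and sysfs files shown in the trace (the helper names here are illustrative, not taken from multipath.sh):

    #!/usr/bin/env bash
    # Illustrative sketch of the ANA failover sequence exercised above (not the literal multipath.sh).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    set_ana() {    # $1 = listener address, $2 = ANA state to set for that listener
        "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a "$1" -s 4420 -n "$2"
    }

    wait_ana() {   # $1 = controller path (e.g. nvme0c0n1), $2 = state expected in sysfs
        local timeout=20
        until [[ "$(cat "/sys/block/$1/ana_state" 2>/dev/null)" == "$2" ]]; do
            (( timeout-- > 0 )) || { echo "timeout waiting for $1 -> $2" >&2; return 1; }
            sleep 1
        done
    }

    # Fail I/O over from the 10.0.0.2 path to the 10.0.0.3 path, as done while fio job 67253 ran.
    set_ana 10.0.0.2 inaccessible
    set_ana 10.0.0.3 non_optimized
    wait_ana nvme0c0n1 inaccessible
    wait_ana nvme0c1n1 non-optimized   # RPC spells it non_optimized; sysfs reports non-optimized

Setting both listeners back to optimized, as the trace does before the round-robin pass, reverses the move.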
00:09:18.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:18.390 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:18.650 Cannot find device "nvmf_tgt_br" 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:18.650 Cannot find device "nvmf_tgt_br2" 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:18.650 Cannot find device "nvmf_tgt_br" 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:18.650 Cannot find device "nvmf_tgt_br2" 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:18.650 18:30:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:18.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:18.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:18.650 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:18.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:18.909 00:09:18.909 --- 10.0.0.2 ping statistics --- 00:09:18.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.909 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:18.909 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:18.909 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:09:18.909 00:09:18.909 --- 10.0.0.3 ping statistics --- 00:09:18.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.909 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:18.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:09:18.909 00:09:18.909 --- 10.0.0.1 ping statistics --- 00:09:18.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.909 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67634 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67634 00:09:18.909 18:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 67634 ']' 00:09:18.910 18:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.910 18:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:18.910 18:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.910 18:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:18.910 18:30:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.910 [2024-05-16 18:30:32.328276] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:18.910 [2024-05-16 18:30:32.328598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.169 [2024-05-16 18:30:32.472108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.169 [2024-05-16 18:30:32.631898] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.169 [2024-05-16 18:30:32.632256] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:19.169 [2024-05-16 18:30:32.632497] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.169 [2024-05-16 18:30:32.632892] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.169 [2024-05-16 18:30:32.632916] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.169 [2024-05-16 18:30:32.632954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.428 [2024-05-16 18:30:32.711933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:19.996 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:19.996 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:09:19.996 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.997 [2024-05-16 18:30:33.387248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.997 [2024-05-16 18:30:33.403129] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:19.997 [2024-05-16 18:30:33.403407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # 
rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.997 malloc0 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:19.997 { 00:09:19.997 "params": { 00:09:19.997 "name": "Nvme$subsystem", 00:09:19.997 "trtype": "$TEST_TRANSPORT", 00:09:19.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:19.997 "adrfam": "ipv4", 00:09:19.997 "trsvcid": "$NVMF_PORT", 00:09:19.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:19.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:19.997 "hdgst": ${hdgst:-false}, 00:09:19.997 "ddgst": ${ddgst:-false} 00:09:19.997 }, 00:09:19.997 "method": "bdev_nvme_attach_controller" 00:09:19.997 } 00:09:19.997 EOF 00:09:19.997 )") 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:19.997 18:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:19.997 "params": { 00:09:19.997 "name": "Nvme1", 00:09:19.997 "trtype": "tcp", 00:09:19.997 "traddr": "10.0.0.2", 00:09:19.997 "adrfam": "ipv4", 00:09:19.997 "trsvcid": "4420", 00:09:19.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:19.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:19.997 "hdgst": false, 00:09:19.997 "ddgst": false 00:09:19.997 }, 00:09:19.997 "method": "bdev_nvme_attach_controller" 00:09:19.997 }' 00:09:20.256 [2024-05-16 18:30:33.499641] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:20.256 [2024-05-16 18:30:33.499730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67668 ] 00:09:20.256 [2024-05-16 18:30:33.639568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.256 [2024-05-16 18:30:33.748315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.515 [2024-05-16 18:30:33.815485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:20.515 Running I/O for 10 seconds... 
00:09:30.532 00:09:30.532 Latency(us) 00:09:30.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.532 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:30.532 Verification LBA range: start 0x0 length 0x1000 00:09:30.532 Nvme1n1 : 10.02 5722.37 44.71 0.00 0.00 22297.65 2859.75 34555.35 00:09:30.532 =================================================================================================================== 00:09:30.532 Total : 5722.37 44.71 0.00 0.00 22297.65 2859.75 34555.35 00:09:30.791 18:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67779 00:09:30.791 18:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:30.791 18:30:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.791 18:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:30.791 18:30:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:30.791 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:30.791 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:30.791 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:30.791 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:30.791 { 00:09:30.792 "params": { 00:09:30.792 "name": "Nvme$subsystem", 00:09:30.792 "trtype": "$TEST_TRANSPORT", 00:09:30.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:30.792 "adrfam": "ipv4", 00:09:30.792 "trsvcid": "$NVMF_PORT", 00:09:30.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:30.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:30.792 "hdgst": ${hdgst:-false}, 00:09:30.792 "ddgst": ${ddgst:-false} 00:09:30.792 }, 00:09:30.792 "method": "bdev_nvme_attach_controller" 00:09:30.792 } 00:09:30.792 EOF 00:09:30.792 )") 00:09:30.792 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:30.792 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:30.792 [2024-05-16 18:30:44.263909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.792 [2024-05-16 18:30:44.263984] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.792 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:30.792 18:30:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:30.792 "params": { 00:09:30.792 "name": "Nvme1", 00:09:30.792 "trtype": "tcp", 00:09:30.792 "traddr": "10.0.0.2", 00:09:30.792 "adrfam": "ipv4", 00:09:30.792 "trsvcid": "4420", 00:09:30.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:30.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:30.792 "hdgst": false, 00:09:30.792 "ddgst": false 00:09:30.792 }, 00:09:30.792 "method": "bdev_nvme_attach_controller" 00:09:30.792 }' 00:09:30.792 [2024-05-16 18:30:44.275847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.792 [2024-05-16 18:30:44.275893] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.792 [2024-05-16 18:30:44.287825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:30.792 [2024-05-16 18:30:44.287865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.299829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.299881] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.311828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.311891] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.318364] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:09:31.050 [2024-05-16 18:30:44.318490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67779 ] 00:09:31.050 [2024-05-16 18:30:44.323839] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.323878] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.335835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.335870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.347835] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.347871] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.359895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.359932] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.371858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.371892] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.383859] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.383891] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.395844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.395877] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.407846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.407880] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.419862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.050 [2024-05-16 18:30:44.419928] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.050 [2024-05-16 18:30:44.431853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.431889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.051 [2024-05-16 18:30:44.443865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.443899] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.051 [2024-05-16 18:30:44.455870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.455908] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.051 [2024-05-16 18:30:44.460177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.051 [2024-05-16 18:30:44.467931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.467965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:31.051 [2024-05-16 18:30:44.479914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.479967] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.051 [2024-05-16 18:30:44.491909] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.491944] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.051 [2024-05-16 18:30:44.503898] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.503934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.051 [2024-05-16 18:30:44.515966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.516000] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.051 [2024-05-16 18:30:44.527902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.527926] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.051 [2024-05-16 18:30:44.539963] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.051 [2024-05-16 18:30:44.539991] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.309 [2024-05-16 18:30:44.551931] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.309 [2024-05-16 18:30:44.551996] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.309 [2024-05-16 18:30:44.563919] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.309 [2024-05-16 18:30:44.563942] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.309 [2024-05-16 18:30:44.575911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.309 [2024-05-16 18:30:44.575942] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.309 [2024-05-16 18:30:44.587915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.587945] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.591961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.310 [2024-05-16 18:30:44.599940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.599965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.611953] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.611980] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.623971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.624008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.635960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.635996] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.647960] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.647989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.659966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.659997] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.671975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.672003] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.675901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:31.310 [2024-05-16 18:30:44.683981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.684007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.695990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.696017] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.707985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.708010] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.720009] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.720034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.732025] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.732062] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.744021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.744051] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.756031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.756067] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.768034] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.768062] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.780055] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.780091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 [2024-05-16 18:30:44.792062] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.792093] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.310 Running I/O for 5 seconds... 
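The "Requested NSID 1 already in use" / "Unable to add namespace" pairs that repeat before and throughout the 5-second run are the target-side log for add-namespace RPCs that request an NSID the subsystem already exposes: the RPC path in nvmf_rpc.c (nvmf_rpc_ns_paused) calls spdk_nvmf_subsystem_add_ns_ext(), which rejects the duplicate NSID. A minimal illustration of the kind of call that produces one such pair, as a sketch only: the bdev name is hypothetical, the path assumes the SPDK checkout root, and option spelling may vary between SPDK releases.

# Adding a namespace with an NSID the subsystem already exposes is rejected,
# which the target logs as the two *ERROR* lines seen in this run.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1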
00:09:31.310 [2024-05-16 18:30:44.804077] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.310 [2024-05-16 18:30:44.804104] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.822545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.822580] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.837729] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.837761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.856225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.856260] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.871747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.871779] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.881445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.881479] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.896419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.896454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.912715] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.912746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.931203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.931238] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.946305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.946365] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.956008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.956038] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.972262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.972295] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:44.988799] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:44.988843] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:45.006154] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:45.006196] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:45.021846] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 
[2024-05-16 18:30:45.021889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:45.031730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:45.031762] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:45.047451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:45.047486] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.569 [2024-05-16 18:30:45.066401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.569 [2024-05-16 18:30:45.066435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.080592] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.080624] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.095312] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.095343] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.111745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.111775] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.127298] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.127332] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.137564] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.137610] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.152617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.152653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.169844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.169877] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.186446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.186481] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.203911] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.203945] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.219947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.219980] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.236608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.236643] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.252818] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.828 [2024-05-16 18:30:45.252860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.828 [2024-05-16 18:30:45.270068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.829 [2024-05-16 18:30:45.270098] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.829 [2024-05-16 18:30:45.285235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.829 [2024-05-16 18:30:45.285281] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.829 [2024-05-16 18:30:45.300571] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.829 [2024-05-16 18:30:45.300608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.829 [2024-05-16 18:30:45.309497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.829 [2024-05-16 18:30:45.309529] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.829 [2024-05-16 18:30:45.325443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.829 [2024-05-16 18:30:45.325478] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.087 [2024-05-16 18:30:45.336498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.087 [2024-05-16 18:30:45.336532] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.087 [2024-05-16 18:30:45.350805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.350870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.367450] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.367483] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.383663] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.383697] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.399661] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.399694] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.416029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.416074] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.433705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.433738] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.449743] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.449781] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.467813] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.467859] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.482718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.482753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.493302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.493339] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.504882] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.504916] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.520076] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.520107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.535764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.535797] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.545578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.545650] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.560673] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.560710] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.088 [2024-05-16 18:30:45.576479] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.088 [2024-05-16 18:30:45.576514] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.346 [2024-05-16 18:30:45.593730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.593764] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.611265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.611300] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.626695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.626743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.636575] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.636608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.652310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.652345] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.669476] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.669513] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.685642] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.685674] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.703207] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.703241] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.719698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.719731] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.736293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.736328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.754038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.754069] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.768829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.768874] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.784424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.784459] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.802541] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.802575] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.817553] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.817615] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.827507] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.827541] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.347 [2024-05-16 18:30:45.842579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.347 [2024-05-16 18:30:45.842614] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.858519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.858554] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.873913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.873945] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.883441] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.883475] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.899324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.899358] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.916208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.916242] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.932928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.932971] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.950088] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.950120] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.968469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.968511] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.983267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.983301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:45.992748] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:45.992780] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:46.008577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:46.008608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:46.025258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:46.025301] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:46.042747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:46.042780] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:46.058573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:46.058608] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:46.075516] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:46.075583] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.606 [2024-05-16 18:30:46.090788] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.606 [2024-05-16 18:30:46.090835] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.110071] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.110104] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.124539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.124573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.136625] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.136658] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.152233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.152268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.170237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.170270] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.184897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.184930] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.194872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.194905] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.210692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.210728] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.219845] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.219877] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.236374] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.236410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.253960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.253991] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.268942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.268974] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.284217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.284253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.302832] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.302865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.317751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.317794] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.332760] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.332794] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.341935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.341966] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.865 [2024-05-16 18:30:46.358751] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.865 [2024-05-16 18:30:46.358785] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.375591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.375643] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.391935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.391969] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.408158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.408220] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.425648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.425681] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.443527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.443590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.458099] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.458149] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.473314] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.473350] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.483279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.483314] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.499285] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.499322] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.515035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.515067] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.524858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.524900] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.541428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.541462] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.558278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.558327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.573670] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.573704] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.582749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.582798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.598761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.598798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.609286] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.609327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.124 [2024-05-16 18:30:46.624414] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.124 [2024-05-16 18:30:46.624451] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.382 [2024-05-16 18:30:46.640952] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.640989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.659084] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.659120] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.673774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.673812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.691456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.691494] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.706527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.706567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.724737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.724775] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.739501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.739538] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.749318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.749354] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.765377] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.765413] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.781299] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.781336] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.799973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.800005] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.814987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.815022] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.824984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.825016] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.840406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.840440] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.857659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.857692] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.383 [2024-05-16 18:30:46.874216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.383 [2024-05-16 18:30:46.874249] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:46.890797] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:46.890857] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:46.907656] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:46.907690] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:46.924370] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:46.924405] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:46.940358] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:46.940394] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:46.958145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:46.958183] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:46.973191] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:46.973227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:46.982518] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:46.982552] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:46.999446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:46.999483] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:47.015731] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:47.015771] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:47.033989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:47.034022] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:47.049054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:47.049087] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:47.066495] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:47.066531] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:47.081307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:47.081342] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:47.097613] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:47.097647] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:47.113593] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:47.113627] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.642 [2024-05-16 18:30:47.131263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.642 [2024-05-16 18:30:47.131297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.900 [2024-05-16 18:30:47.147336] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.900 [2024-05-16 18:30:47.147371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.900 [2024-05-16 18:30:47.164754] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.900 [2024-05-16 18:30:47.164788] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.900 [2024-05-16 18:30:47.181578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.900 [2024-05-16 18:30:47.181613] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.900 [2024-05-16 18:30:47.198114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.900 [2024-05-16 18:30:47.198147] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.900 [2024-05-16 18:30:47.214703] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.900 [2024-05-16 18:30:47.214738] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.900 [2024-05-16 18:30:47.231520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.900 [2024-05-16 18:30:47.231559] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.249047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.249081] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.264914] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.264947] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.283972] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.284005] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.298528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.298562] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.317702] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.317737] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.331792] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.331840] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.347969] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.348003] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.365591] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.365625] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.379617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.379651] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.901 [2024-05-16 18:30:47.395315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.901 [2024-05-16 18:30:47.395350] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.160 [2024-05-16 18:30:47.414134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.160 [2024-05-16 18:30:47.414171] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.160 [2024-05-16 18:30:47.429913] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.160 [2024-05-16 18:30:47.429949] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.160 [2024-05-16 18:30:47.447208] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.160 [2024-05-16 18:30:47.447244] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.160 [2024-05-16 18:30:47.462665] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.160 [2024-05-16 18:30:47.462714] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.160 [2024-05-16 18:30:47.478471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.160 [2024-05-16 18:30:47.478506] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.160 [2024-05-16 18:30:47.494116] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:34.160 [2024-05-16 18:30:47.494153] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same pair of messages -- "Requested NSID 1 already in use" from subsystem.c:2029 followed by "Unable to add namespace" from nvmf_rpc.c:1536 -- repeats for every further nvmf_subsystem_add_ns attempt from 18:30:47.503 through 18:30:49.808 (console time 00:09:34.160 - 00:09:36.510); individual entries not reproduced]
00:09:36.510
00:09:36.510                                                           Latency(us)
00:09:36.510 Device Information            : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:36.510 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:36.510   Nvme1n1                     :       5.01   11608.15      90.69      0.00     0.00   11010.74    4766.25   20494.89
00:09:36.510 ===================================================================================================================
00:09:36.510 Total                         :               11608.15      90.69      0.00     0.00   11010.74    4766.25   20494.89
[the "Requested NSID 1 already in use" / "Unable to add namespace" pair continues from 18:30:49.818 through 18:30:50.094 for the remaining add attempts; individual entries not reproduced]
00:09:36.770 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67779) - No such process
00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67779
00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
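The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above appears to come from zcopy.sh re-issuing nvmf_subsystem_add_ns for NSID 1 while that NSID is still attached, so every retry is rejected until the namespace is removed. A minimal sketch of an RPC sequence that reproduces the same rejection is shown below; it assumes a running nvmf target and uses the in-repo scripts/rpc.py client (the rpc variable and its path are assumptions for illustration, the RPC names and the malloc0/cnode1 names mirror the trace):

# Sketch only: assumes an nvmf target is already running and that scripts/rpc.py
# from the checked-out repo is used as the RPC client (path assumed, not from this log).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512 -b malloc0                           # backing bdev for the namespace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first add: NSID 1 attached
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # rejected: NSID 1 already in use
$rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # NSID 1 can now be reused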
00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.770 delay0 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.770 18:30:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:37.029 [2024-05-16 18:30:50.302035] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:43.606 Initializing NVMe Controllers 00:09:43.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:43.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:43.606 Initialization complete. Launching workers. 
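Before the abort run traced above, the test wraps the malloc bdev in a delay bdev and exposes it as NSID 1, so that submitted commands stay outstanding long enough for the abort example to cancel them. The sketch below restates that setup and the abort invocation; the rpc.py path is an assumption, every flag and value is copied from the trace, and the latency comment reflects my reading of the bdev_delay parameters rather than anything printed in this log:

# Sketch of the abort phase set up above; rpc.py path assumed, parameters taken from the trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Wrap malloc0 in a delay bdev so that queued I/O lingers long enough to be aborted
# (the 1000000 values are the injected read/write latencies passed to bdev_delay_create).
$rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

# Drive NSID 1 over NVMe/TCP for 5 seconds at queue depth 64, 50/50 random read/write,
# while the example submits abort commands against the outstanding I/O.
/home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
    -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'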
00:09:43.606 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 345 00:09:43.606 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 632, failed to submit 33 00:09:43.606 success 507, unsuccess 125, failed 0 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:43.606 rmmod nvme_tcp 00:09:43.606 rmmod nvme_fabrics 00:09:43.606 rmmod nvme_keyring 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67634 ']' 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67634 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 67634 ']' 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 67634 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67634 00:09:43.606 killing process with pid 67634 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67634' 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 67634 00:09:43.606 [2024-05-16 18:30:56.527602] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 67634 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.606 18:30:56 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:43.606 00:09:43.606 real 0m25.121s 00:09:43.606 user 0m40.911s 00:09:43.606 sys 0m7.201s 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:43.606 ************************************ 00:09:43.606 END TEST nvmf_zcopy 00:09:43.606 ************************************ 00:09:43.606 18:30:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.606 18:30:56 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:43.606 18:30:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:43.606 18:30:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:43.606 18:30:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.606 ************************************ 00:09:43.606 START TEST nvmf_nmic 00:09:43.606 ************************************ 00:09:43.606 18:30:56 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:43.606 * Looking for test storage... 00:09:43.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.606 18:30:57 nvmf_tcp.nvmf_nmic 
-- paths/export.sh@2 -- # PATH=... [paths/export.sh@2-@4 prepend the golangci 1.54.2, protoc 21.7 and Go 1.21.1 bin directories to the already-augmented PATH, @5 exports it, and @6 echoes the final value; the long, repetitive PATH strings, which all end in the standard /usr/local/bin:...:/var/lib/snapd/snap/bin suffix, are not reproduced here]
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:43.607 Cannot find device "nvmf_tgt_br" 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:43.607 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:43.866 Cannot find device "nvmf_tgt_br2" 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:43.866 Cannot find device "nvmf_tgt_br" 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:43.866 Cannot find device "nvmf_tgt_br2" 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- 
# ip link delete nvmf_br type bridge 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:43.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:43.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:43.866 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.125 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.125 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:44.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:09:44.126 00:09:44.126 --- 10.0.0.2 ping statistics --- 00:09:44.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.126 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:44.126 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.126 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:09:44.126 00:09:44.126 --- 10.0.0.3 ping statistics --- 00:09:44.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.126 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:44.126 00:09:44.126 --- 10.0.0.1 ping statistics --- 00:09:44.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.126 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68104 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68104 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 68104 ']' 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:44.126 18:30:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:44.126 [2024-05-16 18:30:57.495179] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:09:44.126 [2024-05-16 18:30:57.495785] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.385 [2024-05-16 18:30:57.634435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.385 [2024-05-16 18:30:57.772589] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.385 [2024-05-16 18:30:57.772677] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.385 [2024-05-16 18:30:57.772691] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.385 [2024-05-16 18:30:57.772702] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.385 [2024-05-16 18:30:57.772711] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.385 [2024-05-16 18:30:57.773109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.385 [2024-05-16 18:30:57.773612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.385 [2024-05-16 18:30:57.773725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.385 [2024-05-16 18:30:57.774136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.385 [2024-05-16 18:30:57.839342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.323 [2024-05-16 18:30:58.509474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.323 Malloc0 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.323 18:30:58 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.323 [2024-05-16 18:30:58.588955] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:45.323 [2024-05-16 18:30:58.589535] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:45.323 test case1: single bdev can't be used in multiple subsystems 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.323 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.323 [2024-05-16 18:30:58.613039] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:45.323 [2024-05-16 18:30:58.613083] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:45.323 [2024-05-16 18:30:58.613096] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.323 request: 00:09:45.323 { 00:09:45.323 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:45.323 "namespace": { 00:09:45.323 "bdev_name": "Malloc0", 00:09:45.323 "no_auto_visible": false 00:09:45.323 }, 00:09:45.323 "method": "nvmf_subsystem_add_ns", 00:09:45.323 "req_id": 1 00:09:45.323 } 00:09:45.323 Got JSON-RPC error response 00:09:45.323 response: 00:09:45.323 { 00:09:45.323 "code": -32602, 00:09:45.323 "message": "Invalid parameters" 00:09:45.323 } 00:09:45.323 Adding namespace failed - 
expected result. 00:09:45.323 test case2: host connect to nvmf target in multiple paths 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:45.324 [2024-05-16 18:30:58.625207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid=8b07fcc8-e6b3-4152-8362-9695ab742add -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:45.324 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid=8b07fcc8-e6b3-4152-8362-9695ab742add -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:45.583 18:30:58 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:45.583 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:09:45.583 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.583 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:45.583 18:30:58 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:09:47.489 18:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:47.489 18:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:47.489 18:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:47.489 18:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:47.489 18:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:47.489 18:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:09:47.489 18:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:47.489 [global] 00:09:47.489 thread=1 00:09:47.489 invalidate=1 00:09:47.489 rw=write 00:09:47.489 time_based=1 00:09:47.489 runtime=1 00:09:47.489 ioengine=libaio 00:09:47.489 direct=1 00:09:47.489 bs=4096 00:09:47.489 iodepth=1 00:09:47.489 norandommap=0 00:09:47.489 numjobs=1 00:09:47.489 00:09:47.489 verify_dump=1 00:09:47.489 verify_backlog=512 00:09:47.490 verify_state_save=0 00:09:47.490 do_verify=1 00:09:47.490 verify=crc32c-intel 00:09:47.490 [job0] 00:09:47.490 filename=/dev/nvme0n1 00:09:47.490 Could not set queue depth (nvme0n1) 00:09:47.749 job0: (g=0): rw=write, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.749 fio-3.35 00:09:47.749 Starting 1 thread 00:09:49.129 00:09:49.129 job0: (groupid=0, jobs=1): err= 0: pid=68201: Thu May 16 18:31:02 2024 00:09:49.129 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:49.129 slat (nsec): min=12321, max=72681, avg=16388.59, stdev=5356.66 00:09:49.129 clat (usec): min=142, max=747, avg=212.86, stdev=34.92 00:09:49.129 lat (usec): min=158, max=781, avg=229.25, stdev=35.53 00:09:49.129 clat percentiles (usec): 00:09:49.129 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 178], 20.00th=[ 188], 00:09:49.129 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:09:49.129 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 262], 00:09:49.129 | 99.00th=[ 293], 99.50th=[ 379], 99.90th=[ 627], 99.95th=[ 734], 00:09:49.129 | 99.99th=[ 750] 00:09:49.129 write: IOPS=2633, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:09:49.129 slat (usec): min=15, max=125, avg=24.53, stdev= 8.46 00:09:49.129 clat (usec): min=86, max=398, avg=128.62, stdev=23.75 00:09:49.129 lat (usec): min=107, max=506, avg=153.15, stdev=26.08 00:09:49.129 clat percentiles (usec): 00:09:49.129 | 1.00th=[ 94], 5.00th=[ 99], 10.00th=[ 102], 20.00th=[ 109], 00:09:49.129 | 30.00th=[ 115], 40.00th=[ 121], 50.00th=[ 126], 60.00th=[ 133], 00:09:49.129 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 159], 95.00th=[ 169], 00:09:49.129 | 99.00th=[ 190], 99.50th=[ 202], 99.90th=[ 388], 99.95th=[ 392], 00:09:49.129 | 99.99th=[ 400] 00:09:49.129 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:09:49.129 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:49.129 lat (usec) : 100=3.35%, 250=92.22%, 500=4.35%, 750=0.08% 00:09:49.129 cpu : usr=2.30%, sys=7.80%, ctx=5196, majf=0, minf=2 00:09:49.129 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:49.129 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.129 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.129 issued rwts: total=2560,2636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.129 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:49.129 00:09:49.129 Run status group 0 (all jobs): 00:09:49.129 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:09:49.129 WRITE: bw=10.3MiB/s (10.8MB/s), 10.3MiB/s-10.3MiB/s (10.8MB/s-10.8MB/s), io=10.3MiB (10.8MB), run=1001-1001msec 00:09:49.129 00:09:49.129 Disk stats (read/write): 00:09:49.129 nvme0n1: ios=2221/2560, merge=0/0, ticks=498/380, in_queue=878, util=91.58% 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:49.129 18:31:02 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.129 rmmod nvme_tcp 00:09:49.129 rmmod nvme_fabrics 00:09:49.129 rmmod nvme_keyring 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68104 ']' 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68104 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 68104 ']' 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 68104 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68104 00:09:49.129 killing process with pid 68104 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68104' 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 68104 00:09:49.129 [2024-05-16 18:31:02.435464] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:49.129 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 68104 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:49.388 00:09:49.388 real 0m5.907s 00:09:49.388 user 0m18.911s 00:09:49.388 sys 0m1.958s 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:09:49.388 ************************************ 00:09:49.388 END TEST nvmf_nmic 00:09:49.388 18:31:02 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:49.388 ************************************ 00:09:49.648 18:31:02 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:49.648 18:31:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:49.648 18:31:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:49.648 18:31:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.648 ************************************ 00:09:49.648 START TEST nvmf_fio_target 00:09:49.648 ************************************ 00:09:49.648 18:31:02 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:49.648 * Looking for test storage... 00:09:49.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:49.648 18:31:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:49.648 18:31:02 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:49.648 Cannot find device "nvmf_tgt_br" 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:49.648 Cannot find device "nvmf_tgt_br2" 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:09:49.648 Cannot find device "nvmf_tgt_br" 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:09:49.648 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:49.648 Cannot find device "nvmf_tgt_br2" 00:09:49.649 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:09:49.649 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:49.649 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:49.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:49.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:49.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:09:49.908 00:09:49.908 --- 10.0.0.2 ping statistics --- 00:09:49.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.908 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:49.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:49.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:09:49.908 00:09:49.908 --- 10.0.0.3 ping statistics --- 00:09:49.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.908 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:49.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:09:49.908 00:09:49.908 --- 10.0.0.1 ping statistics --- 00:09:49.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.908 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.908 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68379 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68379 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 68379 ']' 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.909 18:31:03 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:49.909 18:31:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.167 [2024-05-16 18:31:03.444273] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:09:50.167 [2024-05-16 18:31:03.444795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.167 [2024-05-16 18:31:03.578695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.426 [2024-05-16 18:31:03.690013] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.426 [2024-05-16 18:31:03.690076] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.426 [2024-05-16 18:31:03.690102] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.426 [2024-05-16 18:31:03.690110] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.426 [2024-05-16 18:31:03.690117] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.426 [2024-05-16 18:31:03.690294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.426 [2024-05-16 18:31:03.690907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.426 [2024-05-16 18:31:03.691107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.426 [2024-05-16 18:31:03.691113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.426 [2024-05-16 18:31:03.746018] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:50.998 18:31:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:50.998 18:31:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:09:50.998 18:31:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.998 18:31:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.998 18:31:04 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.257 18:31:04 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.257 18:31:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:51.516 [2024-05-16 18:31:04.758207] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.516 18:31:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.775 18:31:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:51.775 18:31:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 
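From here target/fio.sh builds the storage layout it will exercise over NVMe/TCP: several 64 MiB malloc bdevs (block size 512), a raid0 array and a concat array stacked on some of them, one subsystem (cnode1) carrying them all as namespaces, and a TCP listener on 10.0.0.2:4420; the initiator then connects and fio runs against the resulting /dev/nvme0n* devices. A rough, condensed equivalent driven directly through rpc.py is sketched below; the $m* shell variables are illustrative, and the generated --hostnqn/--hostid pair that the harness passes to nvme connect is omitted for brevity.

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Backing bdevs: rpc.py prints the name of each malloc bdev it creates.
    m0=$($rpc_py bdev_malloc_create 64 512)
    m1=$($rpc_py bdev_malloc_create 64 512)
    m2=$($rpc_py bdev_malloc_create 64 512)
    m3=$($rpc_py bdev_malloc_create 64 512)
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b "$m2 $m3"
    # (concat0 is built the same way with -r concat over three more malloc bdevs.)

    # One subsystem exposing the plain malloc bdevs and the RAID volume.
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m0"
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$m1"
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Connect from the initiator side and drive I/O with the fio wrapper.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v

Because the RPC socket is a plain UNIX socket at /var/tmp/spdk.sock, these rpc.py calls work from the host even though nvmf_tgt itself runs inside the nvmf_tgt_ns_spdk network namespace, which is why the trace shows them without an ip netns exec prefix.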
00:09:52.034 18:31:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:52.034 18:31:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.293 18:31:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:52.293 18:31:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.551 18:31:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:52.551 18:31:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:52.810 18:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.069 18:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:53.069 18:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.328 18:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:53.328 18:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:53.587 18:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:53.587 18:31:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:53.846 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:54.105 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:54.105 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.364 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:54.364 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:54.623 18:31:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.882 [2024-05-16 18:31:08.128586] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:54.882 [2024-05-16 18:31:08.128943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.882 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:55.141 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:55.400 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid=8b07fcc8-e6b3-4152-8362-9695ab742add -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:55.400 18:31:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:55.400 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:09:55.400 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.400 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:09:55.400 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:09:55.400 18:31:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:09:57.354 18:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:57.354 18:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.354 18:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:57.354 18:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:09:57.354 18:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.354 18:31:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:09:57.354 18:31:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:57.354 [global] 00:09:57.354 thread=1 00:09:57.354 invalidate=1 00:09:57.354 rw=write 00:09:57.354 time_based=1 00:09:57.354 runtime=1 00:09:57.354 ioengine=libaio 00:09:57.354 direct=1 00:09:57.354 bs=4096 00:09:57.354 iodepth=1 00:09:57.354 norandommap=0 00:09:57.354 numjobs=1 00:09:57.354 00:09:57.354 verify_dump=1 00:09:57.354 verify_backlog=512 00:09:57.354 verify_state_save=0 00:09:57.354 do_verify=1 00:09:57.354 verify=crc32c-intel 00:09:57.354 [job0] 00:09:57.354 filename=/dev/nvme0n1 00:09:57.354 [job1] 00:09:57.354 filename=/dev/nvme0n2 00:09:57.354 [job2] 00:09:57.354 filename=/dev/nvme0n3 00:09:57.613 [job3] 00:09:57.613 filename=/dev/nvme0n4 00:09:57.613 Could not set queue depth (nvme0n1) 00:09:57.613 Could not set queue depth (nvme0n2) 00:09:57.613 Could not set queue depth (nvme0n3) 00:09:57.613 Could not set queue depth (nvme0n4) 00:09:57.613 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.613 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.613 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.613 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:57.613 fio-3.35 00:09:57.613 Starting 4 threads 00:09:58.991 00:09:58.991 job0: (groupid=0, jobs=1): err= 0: pid=68571: Thu May 16 18:31:12 2024 00:09:58.991 read: IOPS=1539, BW=6158KiB/s (6306kB/s)(6164KiB/1001msec) 00:09:58.991 slat (nsec): min=9527, max=33878, avg=12662.02, stdev=3059.09 00:09:58.991 clat (usec): min=193, max=567, avg=290.56, stdev=24.46 00:09:58.991 lat (usec): min=207, max=584, avg=303.22, stdev=24.61 00:09:58.991 clat percentiles (usec): 00:09:58.991 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:09:58.991 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 293], 00:09:58.991 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 334], 00:09:58.991 | 99.00th=[ 
363], 99.50th=[ 379], 99.90th=[ 437], 99.95th=[ 570], 00:09:58.991 | 99.99th=[ 570] 00:09:58.991 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:58.991 slat (usec): min=12, max=226, avg=24.99, stdev= 9.54 00:09:58.991 clat (usec): min=154, max=351, avg=232.13, stdev=27.43 00:09:58.991 lat (usec): min=175, max=486, avg=257.12, stdev=31.49 00:09:58.991 clat percentiles (usec): 00:09:58.991 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:09:58.991 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:09:58.991 | 70.00th=[ 243], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 285], 00:09:58.991 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 347], 99.95th=[ 351], 00:09:58.991 | 99.99th=[ 351] 00:09:58.991 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:09:58.991 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:58.991 lat (usec) : 250=44.78%, 500=55.20%, 750=0.03% 00:09:58.991 cpu : usr=1.50%, sys=5.60%, ctx=3590, majf=0, minf=13 00:09:58.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.991 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.991 job1: (groupid=0, jobs=1): err= 0: pid=68572: Thu May 16 18:31:12 2024 00:09:58.991 read: IOPS=1573, BW=6294KiB/s (6445kB/s)(6300KiB/1001msec) 00:09:58.991 slat (nsec): min=8434, max=49725, avg=14434.65, stdev=5650.13 00:09:58.991 clat (usec): min=174, max=583, avg=288.43, stdev=27.14 00:09:58.991 lat (usec): min=213, max=592, avg=302.86, stdev=27.99 00:09:58.991 clat percentiles (usec): 00:09:58.991 | 1.00th=[ 241], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:09:58.991 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:09:58.991 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 338], 00:09:58.991 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 408], 99.95th=[ 586], 00:09:58.991 | 99.99th=[ 586] 00:09:58.991 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:58.991 slat (nsec): min=10379, max=79753, avg=20471.33, stdev=7657.48 00:09:58.991 clat (usec): min=124, max=385, avg=231.73, stdev=25.97 00:09:58.991 lat (usec): min=170, max=457, avg=252.20, stdev=27.84 00:09:58.991 clat percentiles (usec): 00:09:58.991 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 210], 00:09:58.991 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 235], 00:09:58.991 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 285], 00:09:58.991 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 343], 99.95th=[ 371], 00:09:58.991 | 99.99th=[ 388] 00:09:58.991 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:09:58.991 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:58.991 lat (usec) : 250=46.67%, 500=53.30%, 750=0.03% 00:09:58.991 cpu : usr=1.60%, sys=5.40%, ctx=3632, majf=0, minf=5 00:09:58.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.991 issued rwts: total=1575,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.991 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:09:58.991 job2: (groupid=0, jobs=1): err= 0: pid=68573: Thu May 16 18:31:12 2024 00:09:58.991 read: IOPS=1537, BW=6150KiB/s (6297kB/s)(6156KiB/1001msec) 00:09:58.991 slat (nsec): min=9693, max=41318, avg=15379.85, stdev=2762.95 00:09:58.991 clat (usec): min=203, max=539, avg=287.45, stdev=24.08 00:09:58.991 lat (usec): min=226, max=556, avg=302.83, stdev=24.04 00:09:58.991 clat percentiles (usec): 00:09:58.991 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:09:58.991 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:09:58.991 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:09:58.991 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 388], 99.95th=[ 537], 00:09:58.991 | 99.99th=[ 537] 00:09:58.991 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:58.991 slat (usec): min=12, max=147, avg=27.24, stdev=11.30 00:09:58.991 clat (usec): min=177, max=384, avg=229.97, stdev=26.20 00:09:58.991 lat (usec): min=199, max=421, avg=257.21, stdev=30.68 00:09:58.991 clat percentiles (usec): 00:09:58.991 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 208], 00:09:58.991 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:09:58.991 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 281], 00:09:58.991 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 363], 99.95th=[ 367], 00:09:58.991 | 99.99th=[ 383] 00:09:58.991 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:09:58.991 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:58.991 lat (usec) : 250=46.84%, 500=53.14%, 750=0.03% 00:09:58.991 cpu : usr=2.50%, sys=5.90%, ctx=3588, majf=0, minf=9 00:09:58.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.991 issued rwts: total=1539,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.991 job3: (groupid=0, jobs=1): err= 0: pid=68574: Thu May 16 18:31:12 2024 00:09:58.991 read: IOPS=1573, BW=6294KiB/s (6445kB/s)(6300KiB/1001msec) 00:09:58.991 slat (nsec): min=8333, max=47396, avg=14116.34, stdev=4440.31 00:09:58.991 clat (usec): min=221, max=536, avg=289.17, stdev=27.79 00:09:58.991 lat (usec): min=235, max=551, avg=303.28, stdev=28.56 00:09:58.991 clat percentiles (usec): 00:09:58.991 | 1.00th=[ 243], 5.00th=[ 251], 10.00th=[ 258], 20.00th=[ 269], 00:09:58.991 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:09:58.991 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 338], 00:09:58.991 | 99.00th=[ 375], 99.50th=[ 392], 99.90th=[ 420], 99.95th=[ 537], 00:09:58.991 | 99.99th=[ 537] 00:09:58.991 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:58.991 slat (usec): min=10, max=137, avg=22.82, stdev= 8.95 00:09:58.991 clat (usec): min=127, max=362, avg=229.06, stdev=25.05 00:09:58.991 lat (usec): min=173, max=423, avg=251.88, stdev=27.32 00:09:58.991 clat percentiles (usec): 00:09:58.991 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 210], 00:09:58.991 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:09:58.991 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 277], 00:09:58.991 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 343], 
99.95th=[ 359], 00:09:58.991 | 99.99th=[ 363] 00:09:58.991 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:09:58.991 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:58.991 lat (usec) : 250=48.50%, 500=51.48%, 750=0.03% 00:09:58.991 cpu : usr=1.50%, sys=5.90%, ctx=3628, majf=0, minf=8 00:09:58.991 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:58.991 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:58.991 issued rwts: total=1575,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:58.991 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:58.991 00:09:58.991 Run status group 0 (all jobs): 00:09:58.991 READ: bw=24.3MiB/s (25.5MB/s), 6150KiB/s-6294KiB/s (6297kB/s-6445kB/s), io=24.3MiB (25.5MB), run=1001-1001msec 00:09:58.991 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:09:58.991 00:09:58.991 Disk stats (read/write): 00:09:58.991 nvme0n1: ios=1532/1536, merge=0/0, ticks=444/332, in_queue=776, util=86.75% 00:09:58.991 nvme0n2: ios=1506/1536, merge=0/0, ticks=432/333, in_queue=765, util=86.63% 00:09:58.991 nvme0n3: ios=1480/1536, merge=0/0, ticks=425/350, in_queue=775, util=88.98% 00:09:58.991 nvme0n4: ios=1495/1536, merge=0/0, ticks=415/345, in_queue=760, util=89.65% 00:09:58.991 18:31:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:58.991 [global] 00:09:58.991 thread=1 00:09:58.991 invalidate=1 00:09:58.991 rw=randwrite 00:09:58.991 time_based=1 00:09:58.991 runtime=1 00:09:58.991 ioengine=libaio 00:09:58.991 direct=1 00:09:58.991 bs=4096 00:09:58.991 iodepth=1 00:09:58.991 norandommap=0 00:09:58.991 numjobs=1 00:09:58.991 00:09:58.991 verify_dump=1 00:09:58.991 verify_backlog=512 00:09:58.991 verify_state_save=0 00:09:58.991 do_verify=1 00:09:58.991 verify=crc32c-intel 00:09:58.991 [job0] 00:09:58.991 filename=/dev/nvme0n1 00:09:58.991 [job1] 00:09:58.991 filename=/dev/nvme0n2 00:09:58.991 [job2] 00:09:58.991 filename=/dev/nvme0n3 00:09:58.991 [job3] 00:09:58.991 filename=/dev/nvme0n4 00:09:58.991 Could not set queue depth (nvme0n1) 00:09:58.991 Could not set queue depth (nvme0n2) 00:09:58.992 Could not set queue depth (nvme0n3) 00:09:58.992 Could not set queue depth (nvme0n4) 00:09:58.992 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.992 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.992 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.992 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.992 fio-3.35 00:09:58.992 Starting 4 threads 00:10:00.371 00:10:00.371 job0: (groupid=0, jobs=1): err= 0: pid=68628: Thu May 16 18:31:13 2024 00:10:00.371 read: IOPS=1507, BW=6030KiB/s (6175kB/s)(6036KiB/1001msec) 00:10:00.371 slat (usec): min=9, max=103, avg=15.55, stdev= 5.70 00:10:00.371 clat (usec): min=200, max=1449, avg=340.10, stdev=51.18 00:10:00.371 lat (usec): min=216, max=1461, avg=355.65, stdev=51.38 00:10:00.371 clat percentiles (usec): 00:10:00.371 | 1.00th=[ 260], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 302], 00:10:00.371 | 30.00th=[ 318], 
40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 351], 00:10:00.371 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 400], 00:10:00.371 | 99.00th=[ 429], 99.50th=[ 445], 99.90th=[ 807], 99.95th=[ 1450], 00:10:00.371 | 99.99th=[ 1450] 00:10:00.371 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:00.371 slat (usec): min=10, max=538, avg=27.23, stdev=17.27 00:10:00.371 clat (usec): min=138, max=3216, avg=270.38, stdev=88.48 00:10:00.371 lat (usec): min=192, max=3244, avg=297.61, stdev=91.54 00:10:00.371 clat percentiles (usec): 00:10:00.371 | 1.00th=[ 188], 5.00th=[ 208], 10.00th=[ 219], 20.00th=[ 233], 00:10:00.371 | 30.00th=[ 241], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 273], 00:10:00.371 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 347], 00:10:00.371 | 99.00th=[ 412], 99.50th=[ 445], 99.90th=[ 717], 99.95th=[ 3228], 00:10:00.371 | 99.99th=[ 3228] 00:10:00.371 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:10:00.371 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:00.371 lat (usec) : 250=19.93%, 500=79.77%, 750=0.20%, 1000=0.03% 00:10:00.371 lat (msec) : 2=0.03%, 4=0.03% 00:10:00.371 cpu : usr=1.80%, sys=5.20%, ctx=3054, majf=0, minf=11 00:10:00.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.371 issued rwts: total=1509,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.371 job1: (groupid=0, jobs=1): err= 0: pid=68629: Thu May 16 18:31:13 2024 00:10:00.371 read: IOPS=1508, BW=6034KiB/s (6179kB/s)(6040KiB/1001msec) 00:10:00.371 slat (usec): min=9, max=125, avg=17.86, stdev= 5.84 00:10:00.371 clat (usec): min=239, max=1361, avg=337.72, stdev=50.44 00:10:00.371 lat (usec): min=257, max=1376, avg=355.58, stdev=50.58 00:10:00.371 clat percentiles (usec): 00:10:00.371 | 1.00th=[ 258], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 297], 00:10:00.371 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 351], 00:10:00.371 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 388], 95.00th=[ 400], 00:10:00.371 | 99.00th=[ 433], 99.50th=[ 461], 99.90th=[ 824], 99.95th=[ 1369], 00:10:00.371 | 99.99th=[ 1369] 00:10:00.371 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:10:00.371 slat (usec): min=12, max=135, avg=29.02, stdev=12.23 00:10:00.371 clat (usec): min=147, max=3120, avg=267.86, stdev=85.46 00:10:00.371 lat (usec): min=192, max=3140, avg=296.88, stdev=87.26 00:10:00.371 clat percentiles (usec): 00:10:00.371 | 1.00th=[ 184], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 231], 00:10:00.371 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 260], 60.00th=[ 273], 00:10:00.371 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 343], 00:10:00.372 | 99.00th=[ 392], 99.50th=[ 420], 99.90th=[ 717], 99.95th=[ 3130], 00:10:00.372 | 99.99th=[ 3130] 00:10:00.372 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:10:00.372 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:00.372 lat (usec) : 250=20.72%, 500=79.09%, 750=0.10%, 1000=0.03% 00:10:00.372 lat (msec) : 2=0.03%, 4=0.03% 00:10:00.372 cpu : usr=2.10%, sys=5.60%, ctx=3056, majf=0, minf=16 00:10:00.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.372 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.372 issued rwts: total=1510,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.372 job2: (groupid=0, jobs=1): err= 0: pid=68630: Thu May 16 18:31:13 2024 00:10:00.372 read: IOPS=1610, BW=6442KiB/s (6596kB/s)(6448KiB/1001msec) 00:10:00.372 slat (nsec): min=14940, max=50436, avg=19172.45, stdev=3859.44 00:10:00.372 clat (usec): min=181, max=640, avg=289.65, stdev=54.02 00:10:00.372 lat (usec): min=197, max=655, avg=308.82, stdev=54.66 00:10:00.372 clat percentiles (usec): 00:10:00.372 | 1.00th=[ 204], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 241], 00:10:00.372 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 277], 60.00th=[ 302], 00:10:00.372 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 375], 00:10:00.372 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 519], 99.95th=[ 644], 00:10:00.372 | 99.99th=[ 644] 00:10:00.372 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:00.372 slat (usec): min=22, max=156, avg=28.99, stdev= 7.20 00:10:00.372 clat (usec): min=118, max=1643, avg=212.20, stdev=58.04 00:10:00.372 lat (usec): min=142, max=1675, avg=241.19, stdev=59.35 00:10:00.372 clat percentiles (usec): 00:10:00.372 | 1.00th=[ 141], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 174], 00:10:00.372 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 202], 60.00th=[ 215], 00:10:00.372 | 70.00th=[ 231], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 285], 00:10:00.372 | 99.00th=[ 367], 99.50th=[ 400], 99.90th=[ 635], 99.95th=[ 865], 00:10:00.372 | 99.99th=[ 1647] 00:10:00.372 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:10:00.372 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:00.372 lat (usec) : 250=58.55%, 500=41.20%, 750=0.19%, 1000=0.03% 00:10:00.372 lat (msec) : 2=0.03% 00:10:00.372 cpu : usr=2.10%, sys=6.90%, ctx=3660, majf=0, minf=5 00:10:00.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.372 issued rwts: total=1612,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.372 job3: (groupid=0, jobs=1): err= 0: pid=68631: Thu May 16 18:31:13 2024 00:10:00.372 read: IOPS=1634, BW=6537KiB/s (6694kB/s)(6544KiB/1001msec) 00:10:00.372 slat (nsec): min=11900, max=55219, avg=20917.09, stdev=5378.67 00:10:00.372 clat (usec): min=182, max=1200, avg=287.75, stdev=59.80 00:10:00.372 lat (usec): min=198, max=1216, avg=308.67, stdev=59.90 00:10:00.372 clat percentiles (usec): 00:10:00.372 | 1.00th=[ 200], 5.00th=[ 217], 10.00th=[ 227], 20.00th=[ 239], 00:10:00.372 | 30.00th=[ 247], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 297], 00:10:00.372 | 70.00th=[ 326], 80.00th=[ 347], 90.00th=[ 363], 95.00th=[ 375], 00:10:00.372 | 99.00th=[ 408], 99.50th=[ 416], 99.90th=[ 816], 99.95th=[ 1205], 00:10:00.372 | 99.99th=[ 1205] 00:10:00.372 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:00.372 slat (usec): min=15, max=173, avg=31.27, stdev= 9.53 00:10:00.372 clat (usec): min=122, max=1809, avg=205.74, stdev=56.01 00:10:00.372 lat (usec): min=145, max=1828, avg=237.00, stdev=56.53 00:10:00.372 clat percentiles 
(usec): 00:10:00.372 | 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 169], 00:10:00.372 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 206], 00:10:00.372 | 70.00th=[ 223], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 281], 00:10:00.372 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 400], 99.95th=[ 441], 00:10:00.372 | 99.99th=[ 1811] 00:10:00.372 bw ( KiB/s): min= 8192, max= 8192, per=28.60%, avg=8192.00, stdev= 0.00, samples=1 00:10:00.372 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:00.372 lat (usec) : 250=60.42%, 500=39.47%, 750=0.03%, 1000=0.03% 00:10:00.372 lat (msec) : 2=0.05% 00:10:00.372 cpu : usr=1.90%, sys=8.10%, ctx=3684, majf=0, minf=13 00:10:00.372 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:00.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:00.372 issued rwts: total=1636,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:00.372 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:00.372 00:10:00.372 Run status group 0 (all jobs): 00:10:00.372 READ: bw=24.5MiB/s (25.6MB/s), 6030KiB/s-6537KiB/s (6175kB/s-6694kB/s), io=24.5MiB (25.7MB), run=1001-1001msec 00:10:00.372 WRITE: bw=28.0MiB/s (29.3MB/s), 6138KiB/s-8184KiB/s (6285kB/s-8380kB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:10:00.372 00:10:00.372 Disk stats (read/write): 00:10:00.372 nvme0n1: ios=1165/1536, merge=0/0, ticks=386/388, in_queue=774, util=87.76% 00:10:00.372 nvme0n2: ios=1158/1536, merge=0/0, ticks=389/389, in_queue=778, util=87.80% 00:10:00.372 nvme0n3: ios=1483/1536, merge=0/0, ticks=433/355, in_queue=788, util=89.17% 00:10:00.372 nvme0n4: ios=1536/1553, merge=0/0, ticks=454/349, in_queue=803, util=89.73% 00:10:00.372 18:31:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:00.372 [global] 00:10:00.372 thread=1 00:10:00.372 invalidate=1 00:10:00.372 rw=write 00:10:00.372 time_based=1 00:10:00.372 runtime=1 00:10:00.372 ioengine=libaio 00:10:00.372 direct=1 00:10:00.372 bs=4096 00:10:00.372 iodepth=128 00:10:00.372 norandommap=0 00:10:00.372 numjobs=1 00:10:00.372 00:10:00.372 verify_dump=1 00:10:00.372 verify_backlog=512 00:10:00.372 verify_state_save=0 00:10:00.372 do_verify=1 00:10:00.372 verify=crc32c-intel 00:10:00.372 [job0] 00:10:00.372 filename=/dev/nvme0n1 00:10:00.372 [job1] 00:10:00.372 filename=/dev/nvme0n2 00:10:00.372 [job2] 00:10:00.372 filename=/dev/nvme0n3 00:10:00.372 [job3] 00:10:00.372 filename=/dev/nvme0n4 00:10:00.372 Could not set queue depth (nvme0n1) 00:10:00.372 Could not set queue depth (nvme0n2) 00:10:00.372 Could not set queue depth (nvme0n3) 00:10:00.372 Could not set queue depth (nvme0n4) 00:10:00.372 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.372 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.372 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.372 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.372 fio-3.35 00:10:00.372 Starting 4 threads 00:10:01.750 00:10:01.750 job0: (groupid=0, jobs=1): err= 0: pid=68685: Thu May 16 18:31:14 2024 00:10:01.750 read: IOPS=1816, BW=7265KiB/s (7440kB/s)(7316KiB/1007msec) 
00:10:01.750 slat (usec): min=5, max=19264, avg=283.96, stdev=1367.25 00:10:01.750 clat (usec): min=5887, max=75138, avg=35308.00, stdev=10564.73 00:10:01.750 lat (usec): min=6922, max=75151, avg=35591.95, stdev=10550.92 00:10:01.750 clat percentiles (usec): 00:10:01.750 | 1.00th=[ 9503], 5.00th=[24511], 10.00th=[26870], 20.00th=[29492], 00:10:01.750 | 30.00th=[31065], 40.00th=[32113], 50.00th=[33817], 60.00th=[34866], 00:10:01.750 | 70.00th=[35914], 80.00th=[39060], 90.00th=[45876], 95.00th=[64226], 00:10:01.750 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:10:01.750 | 99.99th=[74974] 00:10:01.750 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:10:01.750 slat (usec): min=12, max=16287, avg=226.74, stdev=1022.47 00:10:01.750 clat (usec): min=17697, max=52001, avg=30236.73, stdev=7059.24 00:10:01.750 lat (usec): min=22692, max=52024, avg=30463.47, stdev=7048.61 00:10:01.750 clat percentiles (usec): 00:10:01.750 | 1.00th=[22676], 5.00th=[23725], 10.00th=[24249], 20.00th=[25035], 00:10:01.750 | 30.00th=[26346], 40.00th=[27395], 50.00th=[28181], 60.00th=[29230], 00:10:01.750 | 70.00th=[30802], 80.00th=[33162], 90.00th=[39584], 95.00th=[51119], 00:10:01.750 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52167], 99.95th=[52167], 00:10:01.750 | 99.99th=[52167] 00:10:01.750 bw ( KiB/s): min= 8175, max= 8192, per=23.81%, avg=8183.50, stdev=12.02, samples=2 00:10:01.750 iops : min= 2043, max= 2048, avg=2045.50, stdev= 3.54, samples=2 00:10:01.750 lat (msec) : 10=0.72%, 20=1.13%, 50=90.90%, 100=7.25% 00:10:01.750 cpu : usr=1.89%, sys=6.16%, ctx=617, majf=0, minf=9 00:10:01.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:01.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.750 issued rwts: total=1829,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.750 job1: (groupid=0, jobs=1): err= 0: pid=68686: Thu May 16 18:31:14 2024 00:10:01.750 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:10:01.750 slat (usec): min=4, max=10539, avg=247.09, stdev=999.42 00:10:01.750 clat (usec): min=19836, max=50959, avg=32128.34, stdev=4529.70 00:10:01.750 lat (usec): min=19863, max=50995, avg=32375.43, stdev=4517.76 00:10:01.750 clat percentiles (usec): 00:10:01.750 | 1.00th=[20317], 5.00th=[24249], 10.00th=[26608], 20.00th=[28967], 00:10:01.750 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31851], 60.00th=[33162], 00:10:01.750 | 70.00th=[34341], 80.00th=[35914], 90.00th=[36963], 95.00th=[40109], 00:10:01.750 | 99.00th=[43254], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:01.750 | 99.99th=[51119] 00:10:01.750 write: IOPS=2489, BW=9958KiB/s (10.2MB/s)(9.79MiB/1007msec); 0 zone resets 00:10:01.750 slat (usec): min=9, max=15371, avg=188.63, stdev=876.42 00:10:01.750 clat (usec): min=6269, max=41292, avg=24363.60, stdev=4441.88 00:10:01.750 lat (usec): min=6394, max=41329, avg=24552.23, stdev=4498.32 00:10:01.750 clat percentiles (usec): 00:10:01.750 | 1.00th=[10159], 5.00th=[18220], 10.00th=[18744], 20.00th=[20055], 00:10:01.750 | 30.00th=[22152], 40.00th=[23725], 50.00th=[24511], 60.00th=[26084], 00:10:01.750 | 70.00th=[27132], 80.00th=[28181], 90.00th=[29230], 95.00th=[31327], 00:10:01.750 | 99.00th=[33162], 99.50th=[33162], 99.90th=[38011], 99.95th=[38536], 00:10:01.750 | 99.99th=[41157] 00:10:01.750 bw ( KiB/s): min= 
8902, max=10120, per=27.68%, avg=9511.00, stdev=861.26, samples=2 00:10:01.750 iops : min= 2225, max= 2530, avg=2377.50, stdev=215.67, samples=2 00:10:01.750 lat (msec) : 10=0.55%, 20=10.30%, 50=89.13%, 100=0.02% 00:10:01.750 cpu : usr=2.88%, sys=7.36%, ctx=583, majf=0, minf=16 00:10:01.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:10:01.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.750 issued rwts: total=2048,2507,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.750 job2: (groupid=0, jobs=1): err= 0: pid=68687: Thu May 16 18:31:14 2024 00:10:01.750 read: IOPS=1859, BW=7439KiB/s (7618kB/s)(7484KiB/1006msec) 00:10:01.750 slat (usec): min=10, max=8874, avg=252.39, stdev=1305.53 00:10:01.750 clat (usec): min=5297, max=37174, avg=32081.78, stdev=4351.00 00:10:01.750 lat (usec): min=5312, max=37202, avg=32334.18, stdev=4166.98 00:10:01.750 clat percentiles (usec): 00:10:01.750 | 1.00th=[12780], 5.00th=[24773], 10.00th=[30802], 20.00th=[31589], 00:10:01.750 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32900], 60.00th=[32900], 00:10:01.750 | 70.00th=[33424], 80.00th=[33817], 90.00th=[35914], 95.00th=[35914], 00:10:01.750 | 99.00th=[36963], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:10:01.750 | 99.99th=[36963] 00:10:01.750 write: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec); 0 zone resets 00:10:01.750 slat (usec): min=13, max=8990, avg=248.19, stdev=1251.64 00:10:01.750 clat (usec): min=21716, max=35824, avg=32227.40, stdev=1798.30 00:10:01.750 lat (usec): min=26079, max=36082, avg=32475.59, stdev=1306.94 00:10:01.750 clat percentiles (usec): 00:10:01.750 | 1.00th=[25035], 5.00th=[29754], 10.00th=[30802], 20.00th=[31065], 00:10:01.750 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:10:01.750 | 70.00th=[32900], 80.00th=[33424], 90.00th=[34341], 95.00th=[34866], 00:10:01.750 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:10:01.750 | 99.99th=[35914] 00:10:01.750 bw ( KiB/s): min= 8175, max= 8208, per=23.84%, avg=8191.50, stdev=23.33, samples=2 00:10:01.750 iops : min= 2043, max= 2052, avg=2047.50, stdev= 6.36, samples=2 00:10:01.750 lat (msec) : 10=0.38%, 20=0.82%, 50=98.80% 00:10:01.750 cpu : usr=2.89%, sys=6.57%, ctx=123, majf=0, minf=5 00:10:01.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:01.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.750 issued rwts: total=1871,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.750 job3: (groupid=0, jobs=1): err= 0: pid=68688: Thu May 16 18:31:14 2024 00:10:01.750 read: IOPS=1879, BW=7518KiB/s (7699kB/s)(7556KiB/1005msec) 00:10:01.751 slat (usec): min=10, max=8835, avg=252.41, stdev=1304.93 00:10:01.751 clat (usec): min=215, max=37566, avg=31852.18, stdev=5073.74 00:10:01.751 lat (usec): min=5220, max=37593, avg=32104.59, stdev=4912.40 00:10:01.751 clat percentiles (usec): 00:10:01.751 | 1.00th=[ 5604], 5.00th=[21103], 10.00th=[30278], 20.00th=[31589], 00:10:01.751 | 30.00th=[31851], 40.00th=[32375], 50.00th=[32900], 60.00th=[33162], 00:10:01.751 | 70.00th=[33424], 80.00th=[33817], 90.00th=[35914], 95.00th=[36439], 00:10:01.751 | 
99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:10:01.751 | 99.99th=[37487] 00:10:01.751 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:10:01.751 slat (usec): min=12, max=9380, avg=248.52, stdev=1263.65 00:10:01.751 clat (usec): min=21529, max=36333, avg=32132.15, stdev=1845.76 00:10:01.751 lat (usec): min=24801, max=36362, avg=32380.67, stdev=1368.23 00:10:01.751 clat percentiles (usec): 00:10:01.751 | 1.00th=[24511], 5.00th=[29754], 10.00th=[30278], 20.00th=[31065], 00:10:01.751 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32375], 60.00th=[32637], 00:10:01.751 | 70.00th=[32900], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:10:01.751 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:10:01.751 | 99.99th=[36439] 00:10:01.751 bw ( KiB/s): min= 8192, max= 8192, per=23.84%, avg=8192.00, stdev= 0.00, samples=2 00:10:01.751 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:01.751 lat (usec) : 250=0.03% 00:10:01.751 lat (msec) : 10=0.81%, 20=0.81%, 50=98.35% 00:10:01.751 cpu : usr=1.79%, sys=6.18%, ctx=125, majf=0, minf=15 00:10:01.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:01.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:01.751 issued rwts: total=1889,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:01.751 00:10:01.751 Run status group 0 (all jobs): 00:10:01.751 READ: bw=29.6MiB/s (31.1MB/s), 7265KiB/s-8135KiB/s (7440kB/s-8330kB/s), io=29.8MiB (31.3MB), run=1005-1007msec 00:10:01.751 WRITE: bw=33.6MiB/s (35.2MB/s), 8135KiB/s-9958KiB/s (8330kB/s-10.2MB/s), io=33.8MiB (35.4MB), run=1005-1007msec 00:10:01.751 00:10:01.751 Disk stats (read/write): 00:10:01.751 nvme0n1: ios=1586/1761, merge=0/0, ticks=13857/12292, in_queue=26149, util=89.57% 00:10:01.751 nvme0n2: ios=1959/2048, merge=0/0, ticks=22659/16028, in_queue=38687, util=88.99% 00:10:01.751 nvme0n3: ios=1553/1824, merge=0/0, ticks=12235/14020, in_queue=26255, util=89.64% 00:10:01.751 nvme0n4: ios=1542/1824, merge=0/0, ticks=11061/12240, in_queue=23301, util=88.97% 00:10:01.751 18:31:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:01.751 [global] 00:10:01.751 thread=1 00:10:01.751 invalidate=1 00:10:01.751 rw=randwrite 00:10:01.751 time_based=1 00:10:01.751 runtime=1 00:10:01.751 ioengine=libaio 00:10:01.751 direct=1 00:10:01.751 bs=4096 00:10:01.751 iodepth=128 00:10:01.751 norandommap=0 00:10:01.751 numjobs=1 00:10:01.751 00:10:01.751 verify_dump=1 00:10:01.751 verify_backlog=512 00:10:01.751 verify_state_save=0 00:10:01.751 do_verify=1 00:10:01.751 verify=crc32c-intel 00:10:01.751 [job0] 00:10:01.751 filename=/dev/nvme0n1 00:10:01.751 [job1] 00:10:01.751 filename=/dev/nvme0n2 00:10:01.751 [job2] 00:10:01.751 filename=/dev/nvme0n3 00:10:01.751 [job3] 00:10:01.751 filename=/dev/nvme0n4 00:10:01.751 Could not set queue depth (nvme0n1) 00:10:01.751 Could not set queue depth (nvme0n2) 00:10:01.751 Could not set queue depth (nvme0n3) 00:10:01.751 Could not set queue depth (nvme0n4) 00:10:01.751 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.751 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:10:01.751 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.751 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.751 fio-3.35 00:10:01.751 Starting 4 threads 00:10:03.127 00:10:03.127 job0: (groupid=0, jobs=1): err= 0: pid=68748: Thu May 16 18:31:16 2024 00:10:03.127 read: IOPS=3754, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1005msec) 00:10:03.127 slat (usec): min=8, max=7491, avg=130.50, stdev=689.40 00:10:03.127 clat (usec): min=271, max=23343, avg=16093.68, stdev=2349.66 00:10:03.127 lat (usec): min=4968, max=25396, avg=16224.19, stdev=2409.91 00:10:03.127 clat percentiles (usec): 00:10:03.127 | 1.00th=[ 5735], 5.00th=[11994], 10.00th=[14353], 20.00th=[15008], 00:10:03.127 | 30.00th=[15533], 40.00th=[15795], 50.00th=[16319], 60.00th=[16712], 00:10:03.127 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[19530], 00:10:03.127 | 99.00th=[21890], 99.50th=[22152], 99.90th=[23200], 99.95th=[23200], 00:10:03.127 | 99.99th=[23462] 00:10:03.127 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:10:03.127 slat (usec): min=12, max=7351, avg=116.55, stdev=605.48 00:10:03.127 clat (usec): min=6818, max=24526, avg=16158.14, stdev=1894.06 00:10:03.127 lat (usec): min=6847, max=24552, avg=16274.69, stdev=1976.73 00:10:03.127 clat percentiles (usec): 00:10:03.127 | 1.00th=[10945], 5.00th=[13042], 10.00th=[14222], 20.00th=[15008], 00:10:03.127 | 30.00th=[15664], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:10:03.127 | 70.00th=[16712], 80.00th=[16909], 90.00th=[17695], 95.00th=[18744], 00:10:03.127 | 99.00th=[22676], 99.50th=[23462], 99.90th=[23725], 99.95th=[24511], 00:10:03.127 | 99.99th=[24511] 00:10:03.127 bw ( KiB/s): min=16384, max=16416, per=36.84%, avg=16400.00, stdev=22.63, samples=2 00:10:03.127 iops : min= 4096, max= 4104, avg=4100.00, stdev= 5.66, samples=2 00:10:03.127 lat (usec) : 500=0.01% 00:10:03.127 lat (msec) : 10=1.18%, 20=94.74%, 50=4.07% 00:10:03.127 cpu : usr=4.08%, sys=11.45%, ctx=402, majf=0, minf=15 00:10:03.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:03.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.127 issued rwts: total=3773,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.127 job1: (groupid=0, jobs=1): err= 0: pid=68749: Thu May 16 18:31:16 2024 00:10:03.127 read: IOPS=2041, BW=8167KiB/s (8364kB/s)(8192KiB/1003msec) 00:10:03.127 slat (usec): min=6, max=24041, avg=243.63, stdev=1738.89 00:10:03.127 clat (usec): min=16961, max=58118, avg=33047.08, stdev=6369.14 00:10:03.127 lat (usec): min=17000, max=64721, avg=33290.70, stdev=6499.05 00:10:03.127 clat percentiles (usec): 00:10:03.127 | 1.00th=[21103], 5.00th=[23725], 10.00th=[23987], 20.00th=[25035], 00:10:03.127 | 30.00th=[31065], 40.00th=[33424], 50.00th=[34866], 60.00th=[35390], 00:10:03.127 | 70.00th=[35914], 80.00th=[36439], 90.00th=[41157], 95.00th=[43779], 00:10:03.127 | 99.00th=[47973], 99.50th=[49546], 99.90th=[52691], 99.95th=[57410], 00:10:03.127 | 99.99th=[57934] 00:10:03.127 write: IOPS=2168, BW=8674KiB/s (8882kB/s)(8700KiB/1003msec); 0 zone resets 00:10:03.127 slat (usec): min=10, max=21039, avg=221.68, stdev=1552.50 00:10:03.127 clat (usec): min=1383, max=41219, avg=27275.94, stdev=7599.32 
00:10:03.127 lat (usec): min=7052, max=41246, avg=27497.62, stdev=7502.75 00:10:03.127 clat percentiles (usec): 00:10:03.127 | 1.00th=[ 7635], 5.00th=[16909], 10.00th=[17695], 20.00th=[18482], 00:10:03.127 | 30.00th=[20579], 40.00th=[27395], 50.00th=[31065], 60.00th=[32113], 00:10:03.127 | 70.00th=[32375], 80.00th=[33162], 90.00th=[34341], 95.00th=[36439], 00:10:03.127 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:03.127 | 99.99th=[41157] 00:10:03.127 bw ( KiB/s): min= 8208, max= 8248, per=18.48%, avg=8228.00, stdev=28.28, samples=2 00:10:03.127 iops : min= 2052, max= 2062, avg=2057.00, stdev= 7.07, samples=2 00:10:03.127 lat (msec) : 2=0.02%, 10=1.14%, 20=11.70%, 50=86.98%, 100=0.17% 00:10:03.127 cpu : usr=2.30%, sys=6.29%, ctx=92, majf=0, minf=7 00:10:03.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:10:03.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.127 issued rwts: total=2048,2175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.127 job2: (groupid=0, jobs=1): err= 0: pid=68750: Thu May 16 18:31:16 2024 00:10:03.127 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:10:03.127 slat (usec): min=7, max=5554, avg=159.12, stdev=792.60 00:10:03.127 clat (usec): min=4998, max=22862, avg=20669.03, stdev=2202.29 00:10:03.127 lat (usec): min=5017, max=22877, avg=20828.14, stdev=2066.64 00:10:03.127 clat percentiles (usec): 00:10:03.127 | 1.00th=[ 5669], 5.00th=[16712], 10.00th=[19792], 20.00th=[20317], 00:10:03.127 | 30.00th=[20579], 40.00th=[20841], 50.00th=[21103], 60.00th=[21365], 00:10:03.127 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[22414], 00:10:03.127 | 99.00th=[22676], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:10:03.127 | 99.99th=[22938] 00:10:03.127 write: IOPS=3066, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:10:03.127 slat (usec): min=15, max=5077, avg=157.19, stdev=726.50 00:10:03.127 clat (usec): min=663, max=21957, avg=20400.22, stdev=1101.82 00:10:03.127 lat (usec): min=4987, max=22217, avg=20557.41, stdev=808.83 00:10:03.127 clat percentiles (usec): 00:10:03.127 | 1.00th=[16188], 5.00th=[18744], 10.00th=[19268], 20.00th=[19792], 00:10:03.127 | 30.00th=[20055], 40.00th=[20579], 50.00th=[20579], 60.00th=[20841], 00:10:03.127 | 70.00th=[21103], 80.00th=[21103], 90.00th=[21365], 95.00th=[21627], 00:10:03.127 | 99.00th=[21890], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:10:03.127 | 99.99th=[21890] 00:10:03.127 bw ( KiB/s): min=12288, max=12288, per=27.60%, avg=12288.00, stdev= 0.00, samples=2 00:10:03.127 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:03.127 lat (usec) : 750=0.02% 00:10:03.127 lat (msec) : 10=0.52%, 20=18.62%, 50=80.85% 00:10:03.127 cpu : usr=3.40%, sys=9.89%, ctx=195, majf=0, minf=9 00:10:03.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:03.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.127 issued rwts: total=3072,3073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.127 job3: (groupid=0, jobs=1): err= 0: pid=68751: Thu May 16 18:31:16 2024 00:10:03.127 read: IOPS=1525, BW=6101KiB/s 
(6248kB/s)(6144KiB/1007msec) 00:10:03.127 slat (usec): min=7, max=13956, avg=247.21, stdev=1270.63 00:10:03.127 clat (usec): min=23952, max=93822, avg=35348.44, stdev=6697.97 00:10:03.127 lat (usec): min=23987, max=93877, avg=35595.65, stdev=6664.96 00:10:03.127 clat percentiles (usec): 00:10:03.127 | 1.00th=[25297], 5.00th=[28443], 10.00th=[31589], 20.00th=[33162], 00:10:03.127 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:10:03.127 | 70.00th=[35390], 80.00th=[35914], 90.00th=[38011], 95.00th=[43779], 00:10:03.127 | 99.00th=[80217], 99.50th=[84411], 99.90th=[89654], 99.95th=[93848], 00:10:03.127 | 99.99th=[93848] 00:10:03.127 write: IOPS=1850, BW=7400KiB/s (7578kB/s)(7452KiB/1007msec); 0 zone resets 00:10:03.127 slat (usec): min=19, max=22341, avg=323.64, stdev=1858.54 00:10:03.127 clat (msec): min=5, max=101, avg=38.55, stdev=15.73 00:10:03.127 lat (msec): min=12, max=101, avg=38.88, stdev=15.87 00:10:03.127 clat percentiles (msec): 00:10:03.127 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 32], 00:10:03.127 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 35], 00:10:03.127 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 63], 95.00th=[ 85], 00:10:03.127 | 99.00th=[ 100], 99.50th=[ 101], 99.90th=[ 102], 99.95th=[ 102], 00:10:03.127 | 99.99th=[ 102] 00:10:03.127 bw ( KiB/s): min= 5688, max= 8208, per=15.61%, avg=6948.00, stdev=1781.91, samples=2 00:10:03.127 iops : min= 1422, max= 2052, avg=1737.00, stdev=445.48, samples=2 00:10:03.127 lat (msec) : 10=0.03%, 20=0.24%, 50=91.23%, 100=8.09%, 250=0.41% 00:10:03.127 cpu : usr=1.99%, sys=5.96%, ctx=137, majf=0, minf=19 00:10:03.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:10:03.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.127 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.127 issued rwts: total=1536,1863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.127 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.127 00:10:03.127 Run status group 0 (all jobs): 00:10:03.127 READ: bw=40.5MiB/s (42.4MB/s), 6101KiB/s-14.7MiB/s (6248kB/s-15.4MB/s), io=40.7MiB (42.7MB), run=1002-1007msec 00:10:03.127 WRITE: bw=43.5MiB/s (45.6MB/s), 7400KiB/s-15.9MiB/s (7578kB/s-16.7MB/s), io=43.8MiB (45.9MB), run=1002-1007msec 00:10:03.127 00:10:03.127 Disk stats (read/write): 00:10:03.127 nvme0n1: ios=3122/3583, merge=0/0, ticks=23785/25359, in_queue=49144, util=87.76% 00:10:03.127 nvme0n2: ios=1576/1728, merge=0/0, ticks=53839/49216, in_queue=103055, util=87.63% 00:10:03.127 nvme0n3: ios=2560/2688, merge=0/0, ticks=12539/12172, in_queue=24711, util=89.10% 00:10:03.127 nvme0n4: ios=1536/1567, merge=0/0, ticks=25654/24282, in_queue=49936, util=89.64% 00:10:03.127 18:31:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:03.127 18:31:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68764 00:10:03.127 18:31:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:03.127 18:31:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:03.127 [global] 00:10:03.127 thread=1 00:10:03.127 invalidate=1 00:10:03.127 rw=read 00:10:03.127 time_based=1 00:10:03.127 runtime=10 00:10:03.127 ioengine=libaio 00:10:03.127 direct=1 00:10:03.127 bs=4096 00:10:03.127 iodepth=1 00:10:03.127 norandommap=1 00:10:03.127 numjobs=1 00:10:03.127 00:10:03.127 [job0] 00:10:03.127 filename=/dev/nvme0n1 00:10:03.127 [job1] 
00:10:03.127 filename=/dev/nvme0n2 00:10:03.127 [job2] 00:10:03.127 filename=/dev/nvme0n3 00:10:03.128 [job3] 00:10:03.128 filename=/dev/nvme0n4 00:10:03.128 Could not set queue depth (nvme0n1) 00:10:03.128 Could not set queue depth (nvme0n2) 00:10:03.128 Could not set queue depth (nvme0n3) 00:10:03.128 Could not set queue depth (nvme0n4) 00:10:03.128 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.128 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.128 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.128 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.128 fio-3.35 00:10:03.128 Starting 4 threads 00:10:06.408 18:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:06.408 fio: pid=68812, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:06.408 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=50229248, buflen=4096 00:10:06.408 18:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:06.408 fio: pid=68811, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:06.408 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=33902592, buflen=4096 00:10:06.408 18:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.408 18:31:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:06.974 fio: pid=68809, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:06.974 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=39510016, buflen=4096 00:10:06.974 18:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:06.974 18:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:07.233 fio: pid=68810, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:07.233 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=59297792, buflen=4096 00:10:07.233 00:10:07.233 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68809: Thu May 16 18:31:20 2024 00:10:07.233 read: IOPS=2756, BW=10.8MiB/s (11.3MB/s)(37.7MiB/3500msec) 00:10:07.233 slat (usec): min=12, max=17832, avg=29.75, stdev=288.18 00:10:07.233 clat (usec): min=141, max=2877, avg=330.53, stdev=82.92 00:10:07.233 lat (usec): min=157, max=18210, avg=360.28, stdev=300.74 00:10:07.233 clat percentiles (usec): 00:10:07.233 | 1.00th=[ 174], 5.00th=[ 200], 10.00th=[ 219], 20.00th=[ 273], 00:10:07.233 | 30.00th=[ 297], 40.00th=[ 318], 50.00th=[ 343], 60.00th=[ 359], 00:10:07.233 | 70.00th=[ 371], 80.00th=[ 388], 90.00th=[ 408], 95.00th=[ 424], 00:10:07.233 | 99.00th=[ 474], 99.50th=[ 523], 99.90th=[ 955], 99.95th=[ 1467], 00:10:07.233 | 99.99th=[ 2868] 00:10:07.233 bw ( KiB/s): min= 9648, max=11880, per=22.44%, avg=10500.00, stdev=986.02, samples=6 00:10:07.233 iops : min= 2412, max= 2970, avg=2625.00, stdev=246.50, samples=6 00:10:07.233 lat (usec) : 250=14.73%, 
500=84.68%, 750=0.40%, 1000=0.09% 00:10:07.233 lat (msec) : 2=0.07%, 4=0.01% 00:10:07.233 cpu : usr=1.37%, sys=5.74%, ctx=9657, majf=0, minf=1 00:10:07.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.233 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.233 issued rwts: total=9647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.233 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68810: Thu May 16 18:31:20 2024 00:10:07.233 read: IOPS=3792, BW=14.8MiB/s (15.5MB/s)(56.6MiB/3818msec) 00:10:07.233 slat (usec): min=13, max=15726, avg=22.49, stdev=252.29 00:10:07.233 clat (usec): min=3, max=3417, avg=239.51, stdev=61.27 00:10:07.233 lat (usec): min=159, max=16037, avg=262.00, stdev=260.01 00:10:07.233 clat percentiles (usec): 00:10:07.233 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 212], 00:10:07.233 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 00:10:07.233 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 289], 00:10:07.233 | 99.00th=[ 326], 99.50th=[ 375], 99.90th=[ 553], 99.95th=[ 1614], 00:10:07.233 | 99.99th=[ 2802] 00:10:07.233 bw ( KiB/s): min=13181, max=16632, per=32.26%, avg=15094.43, stdev=1599.71, samples=7 00:10:07.233 iops : min= 3295, max= 4158, avg=3773.57, stdev=399.98, samples=7 00:10:07.233 lat (usec) : 4=0.01%, 250=68.59%, 500=31.25%, 750=0.07%, 1000=0.01% 00:10:07.233 lat (msec) : 2=0.03%, 4=0.03% 00:10:07.233 cpu : usr=1.05%, sys=5.89%, ctx=14491, majf=0, minf=1 00:10:07.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.233 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.233 issued rwts: total=14478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.233 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.233 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68811: Thu May 16 18:31:20 2024 00:10:07.233 read: IOPS=2599, BW=10.2MiB/s (10.6MB/s)(32.3MiB/3184msec) 00:10:07.233 slat (usec): min=11, max=11204, avg=23.55, stdev=169.62 00:10:07.233 clat (usec): min=167, max=6555, avg=358.64, stdev=130.00 00:10:07.233 lat (usec): min=186, max=11567, avg=382.20, stdev=214.90 00:10:07.233 clat percentiles (usec): 00:10:07.233 | 1.00th=[ 225], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 306], 00:10:07.233 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 359], 60.00th=[ 371], 00:10:07.233 | 70.00th=[ 383], 80.00th=[ 396], 90.00th=[ 412], 95.00th=[ 433], 00:10:07.233 | 99.00th=[ 570], 99.50th=[ 619], 99.90th=[ 1909], 99.95th=[ 3621], 00:10:07.233 | 99.99th=[ 6587] 00:10:07.233 bw ( KiB/s): min= 9536, max=11880, per=22.42%, avg=10489.33, stdev=1063.94, samples=6 00:10:07.233 iops : min= 2384, max= 2970, avg=2622.33, stdev=265.98, samples=6 00:10:07.233 lat (usec) : 250=1.61%, 500=96.65%, 750=1.49%, 1000=0.06% 00:10:07.233 lat (msec) : 2=0.08%, 4=0.06%, 10=0.04% 00:10:07.233 cpu : usr=1.23%, sys=4.84%, ctx=8305, majf=0, minf=1 00:10:07.233 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.234 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:07.234 issued rwts: total=8278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.234 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=68812: Thu May 16 18:31:20 2024 00:10:07.234 read: IOPS=4197, BW=16.4MiB/s (17.2MB/s)(47.9MiB/2922msec) 00:10:07.234 slat (usec): min=11, max=122, avg=14.38, stdev= 4.50 00:10:07.234 clat (usec): min=145, max=1242, avg=222.27, stdev=32.93 00:10:07.234 lat (usec): min=158, max=1266, avg=236.65, stdev=34.34 00:10:07.234 clat percentiles (usec): 00:10:07.234 | 1.00th=[ 167], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 200], 00:10:07.234 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:10:07.234 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 269], 00:10:07.234 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 445], 99.95th=[ 742], 00:10:07.234 | 99.99th=[ 1106] 00:10:07.234 bw ( KiB/s): min=15480, max=17952, per=35.51%, avg=16616.00, stdev=947.77, samples=5 00:10:07.234 iops : min= 3870, max= 4488, avg=4154.00, stdev=236.94, samples=5 00:10:07.234 lat (usec) : 250=86.44%, 500=13.46%, 750=0.04%, 1000=0.03% 00:10:07.234 lat (msec) : 2=0.02% 00:10:07.234 cpu : usr=1.54%, sys=5.24%, ctx=12265, majf=0, minf=1 00:10:07.234 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:07.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.234 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.234 issued rwts: total=12264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.234 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:07.234 00:10:07.234 Run status group 0 (all jobs): 00:10:07.234 READ: bw=45.7MiB/s (47.9MB/s), 10.2MiB/s-16.4MiB/s (10.6MB/s-17.2MB/s), io=174MiB (183MB), run=2922-3818msec 00:10:07.234 00:10:07.234 Disk stats (read/write): 00:10:07.234 nvme0n1: ios=9107/0, merge=0/0, ticks=3109/0, in_queue=3109, util=94.77% 00:10:07.234 nvme0n2: ios=13590/0, merge=0/0, ticks=3349/0, in_queue=3349, util=94.92% 00:10:07.234 nvme0n3: ios=8151/0, merge=0/0, ticks=2916/0, in_queue=2916, util=96.06% 00:10:07.234 nvme0n4: ios=12023/0, merge=0/0, ticks=2708/0, in_queue=2708, util=96.73% 00:10:07.234 18:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.234 18:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:07.492 18:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.492 18:31:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:07.750 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.750 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:08.008 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.008 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:08.266 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.266 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 68764 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.523 nvmf hotplug test: fio failed as expected 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:08.523 18:31:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.093 rmmod nvme_tcp 00:10:09.093 rmmod nvme_fabrics 00:10:09.093 rmmod nvme_keyring 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68379 ']' 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68379 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 68379 ']' 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@950 -- # kill -0 68379 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68379 00:10:09.093 killing process with pid 68379 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68379' 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 68379 00:10:09.093 [2024-05-16 18:31:22.380967] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:09.093 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 68379 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:09.352 00:10:09.352 real 0m19.814s 00:10:09.352 user 1m16.134s 00:10:09.352 sys 0m8.797s 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:09.352 18:31:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.352 ************************************ 00:10:09.352 END TEST nvmf_fio_target 00:10:09.352 ************************************ 00:10:09.352 18:31:22 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:09.352 18:31:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:09.352 18:31:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:09.352 18:31:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.352 ************************************ 00:10:09.352 START TEST nvmf_bdevio 00:10:09.352 ************************************ 00:10:09.352 18:31:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:09.610 * Looking for test storage... 
00:10:09.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:09.610 18:31:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:09.610 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:09.610 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.610 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.611 18:31:22 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:09.611 Cannot find device "nvmf_tgt_br" 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:09.611 Cannot find device "nvmf_tgt_br2" 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:09.611 Cannot find device "nvmf_tgt_br" 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:09.611 Cannot find device "nvmf_tgt_br2" 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:09.611 18:31:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:09.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:09.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:09.611 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:09.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:10:09.870 00:10:09.870 --- 10.0.0.2 ping statistics --- 00:10:09.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.870 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:09.870 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:09.870 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:10:09.870 00:10:09.870 --- 10.0.0.3 ping statistics --- 00:10:09.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.870 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:09.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:09.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:10:09.870 00:10:09.870 --- 10.0.0.1 ping statistics --- 00:10:09.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.870 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.870 18:31:23 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69085 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69085 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 69085 ']' 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:09.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:09.871 18:31:23 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:09.871 [2024-05-16 18:31:23.343702] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:10:09.871 [2024-05-16 18:31:23.343845] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.129 [2024-05-16 18:31:23.485938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:10.129 [2024-05-16 18:31:23.607708] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.129 [2024-05-16 18:31:23.608270] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:10.129 [2024-05-16 18:31:23.608762] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.129 [2024-05-16 18:31:23.609267] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.129 [2024-05-16 18:31:23.609585] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.129 [2024-05-16 18:31:23.610070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.129 [2024-05-16 18:31:23.610240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:10.129 [2024-05-16 18:31:23.610393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:10.129 [2024-05-16 18:31:23.610544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.388 [2024-05-16 18:31:23.664574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.955 [2024-05-16 18:31:24.355978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.955 Malloc0 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:10.955 [2024-05-16 18:31:24.436070] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:10.955 [2024-05-16 18:31:24.436496] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:10.955 { 00:10:10.955 "params": { 00:10:10.955 "name": "Nvme$subsystem", 00:10:10.955 "trtype": "$TEST_TRANSPORT", 00:10:10.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:10.955 "adrfam": "ipv4", 00:10:10.955 "trsvcid": "$NVMF_PORT", 00:10:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:10.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:10.955 "hdgst": ${hdgst:-false}, 00:10:10.955 "ddgst": ${ddgst:-false} 00:10:10.955 }, 00:10:10.955 "method": "bdev_nvme_attach_controller" 00:10:10.955 } 00:10:10.955 EOF 00:10:10.955 )") 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:10.955 18:31:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:10.955 "params": { 00:10:10.955 "name": "Nvme1", 00:10:10.955 "trtype": "tcp", 00:10:10.955 "traddr": "10.0.0.2", 00:10:10.955 "adrfam": "ipv4", 00:10:10.955 "trsvcid": "4420", 00:10:10.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:10.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:10.955 "hdgst": false, 00:10:10.955 "ddgst": false 00:10:10.955 }, 00:10:10.955 "method": "bdev_nvme_attach_controller" 00:10:10.955 }' 00:10:11.214 [2024-05-16 18:31:24.494468] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
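The gen_nvmf_target_json output printed just above is handed to bdevio through an anonymous file descriptor (--json /dev/fd/62). The sketch below shows the same pattern with process substitution; the outer "subsystems"/"bdev" wrapper is an assumption (the usual SPDK --json config shape, not shown in this trace), while the attach parameters are copied from the printf output above.

# Hand a generated bdev config to bdevio over an fd, as target/bdevio.sh
# does with --json /dev/fd/62. The outer wrapper is assumed.
gen_cfg() {
cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# <(...) exposes the generated config as a readable /dev/fd/N
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_cfg)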
00:10:11.214 [2024-05-16 18:31:24.494589] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69121 ] 00:10:11.214 [2024-05-16 18:31:24.636026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:11.472 [2024-05-16 18:31:24.798936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.472 [2024-05-16 18:31:24.799131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.472 [2024-05-16 18:31:24.799136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.472 [2024-05-16 18:31:24.884986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:11.732 I/O targets: 00:10:11.732 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:11.732 00:10:11.732 00:10:11.732 CUnit - A unit testing framework for C - Version 2.1-3 00:10:11.732 http://cunit.sourceforge.net/ 00:10:11.732 00:10:11.732 00:10:11.732 Suite: bdevio tests on: Nvme1n1 00:10:11.732 Test: blockdev write read block ...passed 00:10:11.732 Test: blockdev write zeroes read block ...passed 00:10:11.732 Test: blockdev write zeroes read no split ...passed 00:10:11.732 Test: blockdev write zeroes read split ...passed 00:10:11.732 Test: blockdev write zeroes read split partial ...passed 00:10:11.732 Test: blockdev reset ...[2024-05-16 18:31:25.050088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:11.732 [2024-05-16 18:31:25.050479] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec2360 (9): Bad file descriptor 00:10:11.732 passed 00:10:11.732 Test: blockdev write read 8 blocks ...[2024-05-16 18:31:25.063028] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:11.732 passed 00:10:11.732 Test: blockdev write read size > 128k ...passed 00:10:11.732 Test: blockdev write read invalid size ...passed 00:10:11.732 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:11.732 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:11.732 Test: blockdev write read max offset ...passed 00:10:11.732 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:11.732 Test: blockdev writev readv 8 blocks ...passed 00:10:11.732 Test: blockdev writev readv 30 x 1block ...passed 00:10:11.732 Test: blockdev writev readv block ...passed 00:10:11.732 Test: blockdev writev readv size > 128k ...passed 00:10:11.732 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:11.732 Test: blockdev comparev and writev ...[2024-05-16 18:31:25.072160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.732 [2024-05-16 18:31:25.072345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.072373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.732 [2024-05-16 18:31:25.072385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.072787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.732 [2024-05-16 18:31:25.072811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.072843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.732 [2024-05-16 18:31:25.072855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.073193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.732 [2024-05-16 18:31:25.073222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.073240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.732 [2024-05-16 18:31:25.073251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.073581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.732 [2024-05-16 18:31:25.073602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.073619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:11.732 [2024-05-16 18:31:25.073629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:11.732 passed 00:10:11.732 Test: blockdev nvme passthru rw ...passed 00:10:11.732 Test: blockdev nvme passthru vendor specific ...[2024-05-16 18:31:25.074540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:11.732 [2024-05-16 18:31:25.074565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.074689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:11.732 [2024-05-16 18:31:25.074710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.074869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:11.732 [2024-05-16 18:31:25.074894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:11.732 [2024-05-16 18:31:25.075021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:11.732 [2024-05-16 18:31:25.075052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:11.732 passed 00:10:11.732 Test: blockdev nvme admin passthru ...passed 00:10:11.732 Test: blockdev copy ...passed 00:10:11.732 00:10:11.732 Run Summary: Type Total Ran Passed Failed Inactive 00:10:11.732 suites 1 1 n/a 0 0 00:10:11.732 tests 23 23 23 0 0 00:10:11.732 asserts 152 152 152 0 n/a 00:10:11.732 00:10:11.732 Elapsed time = 0.157 seconds 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.991 rmmod nvme_tcp 00:10:11.991 rmmod nvme_fabrics 00:10:11.991 rmmod nvme_keyring 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69085 ']' 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69085 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 69085 ']' 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # kill -0 69085 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:11.991 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69085 00:10:12.251 killing process with pid 69085 00:10:12.251 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:10:12.251 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:10:12.251 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69085' 00:10:12.251 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 69085 00:10:12.251 [2024-05-16 18:31:25.503208] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:12.251 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 69085 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:12.511 ************************************ 00:10:12.511 END TEST nvmf_bdevio 00:10:12.511 ************************************ 00:10:12.511 00:10:12.511 real 0m3.032s 00:10:12.511 user 0m10.125s 00:10:12.511 sys 0m0.850s 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:12.511 18:31:25 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:12.511 18:31:25 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:12.511 18:31:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:12.511 18:31:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.511 18:31:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.511 ************************************ 00:10:12.511 START TEST nvmf_auth_target 00:10:12.511 ************************************ 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:12.511 * Looking for test storage... 
00:10:12.511 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.511 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.512 18:31:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.512 Cannot find device "nvmf_tgt_br" 00:10:12.512 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:12.512 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.771 Cannot find device "nvmf_tgt_br2" 00:10:12.771 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:12.771 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.771 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:12.771 Cannot find device "nvmf_tgt_br" 00:10:12.771 
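The nvmf_veth_init sequence traced earlier for the bdevio run, and starting again here for the auth test, always builds the same fixture: a network namespace for the target, veth pairs whose peer ends are enslaved to nvmf_br, 10.0.0.x/24 addresses on both sides, an iptables ACCEPT rule for TCP port 4420, and ping checks in each direction. A minimal sketch of that topology is below, with shortened names and a single target interface instead of two; it illustrates the idea rather than replacing nvmf/common.sh.

# Minimal sketch of the fixture nvmf_veth_init builds (names shortened,
# one target interface instead of two); run as root.
ip netns add tgt_ns                                       # namespace that will host nvmf_tgt
ip link add init_if type veth peer name init_br           # initiator-side veth pair
ip link add tgt_if type veth peer name tgt_br             # target-side veth pair
ip link set tgt_if netns tgt_ns                           # move the target end into the namespace

ip addr add 10.0.0.1/24 dev init_if                       # initiator address
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev tgt_if   # target address

ip link set init_if up
ip link set init_br up
ip link set tgt_br up
ip netns exec tgt_ns ip link set tgt_if up
ip netns exec tgt_ns ip link set lo up

ip link add sketch_br type bridge                         # bridge tying the peer ends together
ip link set sketch_br up
ip link set init_br master sketch_br
ip link set tgt_br master sketch_br

iptables -I INPUT 1 -i init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                        # same reachability check as the trace

Teardown is the inverse (detach from the bridge, delete the links, remove the namespace), which is why the "Cannot find device" and "Cannot open network namespace" messages above are harmless: the cleanup half runs first and simply finds nothing to remove.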
18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:12.771 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.771 Cannot find device "nvmf_tgt_br2" 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.772 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:12.772 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:13.031 18:31:26 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:13.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:13.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:10:13.031 00:10:13.031 --- 10.0.0.2 ping statistics --- 00:10:13.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.031 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:13.031 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:13.031 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:13.031 00:10:13.031 --- 10.0.0.3 ping statistics --- 00:10:13.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.031 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:13.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:10:13.031 00:10:13.031 --- 10.0.0.1 ping statistics --- 00:10:13.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.031 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69296 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69296 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 69296 ']' 00:10:13.031 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.031 18:31:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:13.032 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.032 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:13.032 18:31:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.967 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:13.967 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:10:13.967 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.967 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.967 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69328 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ae28b2a028b03b9e25349e2bd9daf7866584d04c5fb8adc5 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rDU 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ae28b2a028b03b9e25349e2bd9daf7866584d04c5fb8adc5 0 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ae28b2a028b03b9e25349e2bd9daf7866584d04c5fb8adc5 0 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ae28b2a028b03b9e25349e2bd9daf7866584d04c5fb8adc5 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:14.226 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rDU 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rDU 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.rDU 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2e59c941fc9661dd76d0356d02041e70d34c4127e17b270e70dfc48c41cabbb0 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.vxu 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2e59c941fc9661dd76d0356d02041e70d34c4127e17b270e70dfc48c41cabbb0 3 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2e59c941fc9661dd76d0356d02041e70d34c4127e17b270e70dfc48c41cabbb0 3 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2e59c941fc9661dd76d0356d02041e70d34c4127e17b270e70dfc48c41cabbb0 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.vxu 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.vxu 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.vxu 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=56fbc308f932e39afbec1da2ceb0e43a 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UhS 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 56fbc308f932e39afbec1da2ceb0e43a 1 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 56fbc308f932e39afbec1da2ceb0e43a 1 
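Each gen_dhchap_key call traced here draws the requested number of hex characters from /dev/urandom with xxd, wraps them into the DHHC-1 text form with an inline python helper, and stores the result in a temp file locked down to 0600. The sketch below follows the same flow; the exact DHHC-1 framing (base64 of the ASCII secret with a little-endian CRC-32 appended, two-hex-digit hash id) is assumed from the nvme-cli gen-dhchap-key convention rather than copied from format_dhchap_key.

# Sketch of the gen_dhchap_key flow traced above. Digest ids follow the map in
# the trace: 0=null, 1=sha256, 2=sha384, 3=sha512. The DHHC-1 framing below
# (secret + little-endian CRC-32, base64-encoded) is an assumption.
gen_key_sketch() {
    local digest_id=$1 len=$2 hex file
    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters of randomness
    file=$(mktemp -t spdk.key-sketch.XXX)
    python3 - "$digest_id" "$hex" > "$file" <<'PY'
import base64, sys, zlib
digest_id, key = int(sys.argv[1]), sys.argv[2].encode()  # the ASCII hex string is the secret
crc = zlib.crc32(key).to_bytes(4, "little")              # assumed framing: secret || CRC-32 (LE)
print("DHHC-1:{:02x}:{}:".format(digest_id, base64.b64encode(key + crc).decode()))
PY
    chmod 0600 "$file"    # same permissions the trace applies
    echo "$file"
}

gen_key_sketch 0 48   # a null-digest key of 48 hex chars, like keys[0] above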
00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=56fbc308f932e39afbec1da2ceb0e43a 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UhS 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UhS 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.UhS 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1b3b7146469ab72457e4a8433e66c730a75ab3657a535d78 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aDJ 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1b3b7146469ab72457e4a8433e66c730a75ab3657a535d78 2 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1b3b7146469ab72457e4a8433e66c730a75ab3657a535d78 2 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1b3b7146469ab72457e4a8433e66c730a75ab3657a535d78 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:14.227 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:14.494 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aDJ 00:10:14.494 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aDJ 00:10:14.494 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.aDJ 00:10:14.494 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:14.494 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:14.495 
18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0a210cbf495c2fb3500322b865a1c7a9ed0687bdcb3ef8f8 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.baA 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0a210cbf495c2fb3500322b865a1c7a9ed0687bdcb3ef8f8 2 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0a210cbf495c2fb3500322b865a1c7a9ed0687bdcb3ef8f8 2 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0a210cbf495c2fb3500322b865a1c7a9ed0687bdcb3ef8f8 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.baA 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.baA 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.baA 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=55739dd6cb4d62a02ea93b5fa05a82cc 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Z27 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 55739dd6cb4d62a02ea93b5fa05a82cc 1 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 55739dd6cb4d62a02ea93b5fa05a82cc 1 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=55739dd6cb4d62a02ea93b5fa05a82cc 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Z27 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Z27 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Z27 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=55db220cd4f56e7bd60b2cdaad8467e123094ef52477a41888a27c677c7bcb85 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.SSN 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 55db220cd4f56e7bd60b2cdaad8467e123094ef52477a41888a27c677c7bcb85 3 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 55db220cd4f56e7bd60b2cdaad8467e123094ef52477a41888a27c677c7bcb85 3 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=55db220cd4f56e7bd60b2cdaad8467e123094ef52477a41888a27c677c7bcb85 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.SSN 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.SSN 00:10:14.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.SSN 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69296 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 69296 ']' 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:14.495 18:31:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
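From here the test drives two SPDK processes side by side: the nvmf target started inside nvmf_tgt_ns_spdk (nvmfpid 69296, answering RPCs on the default /var/tmp/spdk.sock) and a host-side spdk_tgt started with -r /var/tmp/host.sock (hostpid 69328). rpc_cmd talks to the former, hostrpc to the latter, and both keyrings are loaded from the same key files so the two ends can authenticate each other with DH-HMAC-CHAP. A condensed sketch of that split is below; the wrapper names are illustrative, the rpc.py invocations mirror the trace.

# Two RPC endpoints, as in the trace: the target answers on the default socket,
# the host-side app was started with "-r /var/tmp/host.sock".
SPDK=/home/vagrant/spdk_repo/spdk

target_rpc() { "$SPDK/scripts/rpc.py" "$@"; }                        # default /var/tmp/spdk.sock
host_rpc()   { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }  # host-side socket

# Register the same DHCHAP key files on both keyrings (paths from the trace).
target_rpc keyring_file_add_key key0  /tmp/spdk.key-null.rDU
target_rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vxu
host_rpc   keyring_file_add_key key0  /tmp/spdk.key-null.rDU
host_rpc   keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vxu

The nvmf_subsystem_add_host and bdev_nvme_attach_controller calls that follow then refer to these names (--dhchap-key key0 --dhchap-ctrlr-key ckey0) on the target and host side respectively.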
00:10:14.764 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:14.764 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:10:14.764 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69328 /var/tmp/host.sock 00:10:14.764 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 69328 ']' 00:10:14.764 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:10:14.764 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:14.764 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:14.764 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:14.764 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.330 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rDU 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rDU 00:10:15.331 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rDU 00:10:15.589 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.vxu ]] 00:10:15.589 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vxu 00:10:15.589 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.589 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.589 18:31:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.589 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vxu 00:10:15.589 18:31:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.vxu 00:10:15.856 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:15.856 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UhS 00:10:15.856 18:31:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.856 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.856 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.856 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.UhS 00:10:15.856 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.UhS 00:10:16.162 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.aDJ ]] 00:10:16.162 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aDJ 00:10:16.162 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.162 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.162 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.162 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aDJ 00:10:16.162 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aDJ 00:10:16.420 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:16.420 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.baA 00:10:16.421 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.421 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.421 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.421 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.baA 00:10:16.421 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.baA 00:10:16.680 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Z27 ]] 00:10:16.680 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z27 00:10:16.680 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.680 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.680 18:31:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.680 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z27 00:10:16.680 18:31:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Z27 00:10:16.938 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:16.938 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SSN 00:10:16.938 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.938 18:31:30 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.938 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.938 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SSN 00:10:16.938 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SSN 00:10:17.197 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:17.197 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:17.197 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.198 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:17.767 00:10:17.767 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:17.767 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.767 18:31:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:18.026 { 00:10:18.026 "cntlid": 1, 00:10:18.026 "qid": 0, 00:10:18.026 "state": "enabled", 00:10:18.026 "listen_address": { 00:10:18.026 "trtype": "TCP", 00:10:18.026 "adrfam": "IPv4", 00:10:18.026 "traddr": "10.0.0.2", 00:10:18.026 "trsvcid": "4420" 00:10:18.026 }, 00:10:18.026 "peer_address": { 00:10:18.026 "trtype": "TCP", 00:10:18.026 "adrfam": "IPv4", 00:10:18.026 "traddr": "10.0.0.1", 00:10:18.026 "trsvcid": "35350" 00:10:18.026 }, 00:10:18.026 "auth": { 00:10:18.026 "state": "completed", 00:10:18.026 "digest": "sha256", 00:10:18.026 "dhgroup": "null" 00:10:18.026 } 00:10:18.026 } 00:10:18.026 ]' 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:18.026 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.285 18:31:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.566 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:23.566 18:31:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:23.825 { 00:10:23.825 "cntlid": 3, 00:10:23.825 "qid": 0, 00:10:23.825 "state": "enabled", 00:10:23.825 "listen_address": { 00:10:23.825 "trtype": "TCP", 00:10:23.825 "adrfam": "IPv4", 00:10:23.825 "traddr": "10.0.0.2", 00:10:23.825 "trsvcid": "4420" 00:10:23.825 }, 00:10:23.825 "peer_address": { 00:10:23.825 "trtype": "TCP", 00:10:23.825 "adrfam": "IPv4", 00:10:23.825 "traddr": "10.0.0.1", 00:10:23.825 "trsvcid": "44836" 00:10:23.825 }, 00:10:23.825 "auth": { 00:10:23.825 "state": "completed", 00:10:23.825 "digest": "sha256", 00:10:23.825 "dhgroup": "null" 00:10:23.825 } 00:10:23.825 } 00:10:23.825 ]' 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:23.825 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.083 18:31:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.051 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.052 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.052 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.052 18:31:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:25.052 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.052 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.311 00:10:25.569 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:25.569 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:25.569 18:31:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:25.569 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:25.569 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:25.569 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.569 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.569 18:31:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.569 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:25.569 { 00:10:25.569 "cntlid": 5, 00:10:25.569 "qid": 0, 00:10:25.569 "state": "enabled", 00:10:25.569 "listen_address": { 00:10:25.569 "trtype": "TCP", 00:10:25.569 "adrfam": "IPv4", 00:10:25.569 "traddr": "10.0.0.2", 00:10:25.569 "trsvcid": "4420" 00:10:25.569 }, 00:10:25.569 "peer_address": { 00:10:25.569 "trtype": "TCP", 00:10:25.569 "adrfam": "IPv4", 00:10:25.569 "traddr": "10.0.0.1", 00:10:25.569 "trsvcid": "44866" 00:10:25.569 }, 00:10:25.569 "auth": { 00:10:25.569 "state": "completed", 00:10:25.569 "digest": "sha256", 00:10:25.569 "dhgroup": "null" 00:10:25.569 } 00:10:25.569 } 00:10:25.569 ]' 00:10:25.569 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:25.828 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:25.828 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:25.828 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:25.828 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:25.828 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:25.828 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:25.828 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.086 18:31:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret 
DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:10:26.652 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:26.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:26.911 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:27.478 00:10:27.478 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:27.478 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:27.478 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:27.736 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:27.736 18:31:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
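
Every connect_authenticate pass traced in this section has the same shape; the sketch below condenses one pass using the key0/ckey0 pair and the sha256 + null combination from the first iteration. rpc_cmd and hostrpc are the trace's wrappers around scripts/rpc.py, pointed at the target socket /var/tmp/spdk.sock and the SPDK-host socket /var/tmp/host.sock respectively; the NQN and host UUID are the ones used throughout this run.

    # Allow the digest/dhgroup under test on the SPDK host (initiator) side.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    # Require DH-HMAC-CHAP for this host on the target (ckey0 enables bidirectional auth).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Attach from the SPDK host; the controller only appears if authentication completes.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # The target's view of the new qpair must report the negotiated auth parameters.
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    hostrpc bdev_nvme_detach_controller nvme0

For key index 3 there is no controller key (ckeys[3] is empty), so that pass omits --dhchap-ctrlr-key on both the nvmf_subsystem_add_host and bdev_nvme_attach_controller calls, as the entries above show.
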
00:10:27.736 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.736 18:31:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:27.736 { 00:10:27.736 "cntlid": 7, 00:10:27.736 "qid": 0, 00:10:27.736 "state": "enabled", 00:10:27.736 "listen_address": { 00:10:27.736 "trtype": "TCP", 00:10:27.736 "adrfam": "IPv4", 00:10:27.736 "traddr": "10.0.0.2", 00:10:27.736 "trsvcid": "4420" 00:10:27.736 }, 00:10:27.736 "peer_address": { 00:10:27.736 "trtype": "TCP", 00:10:27.736 "adrfam": "IPv4", 00:10:27.736 "traddr": "10.0.0.1", 00:10:27.736 "trsvcid": "44898" 00:10:27.736 }, 00:10:27.736 "auth": { 00:10:27.736 "state": "completed", 00:10:27.736 "digest": "sha256", 00:10:27.736 "dhgroup": "null" 00:10:27.736 } 00:10:27.736 } 00:10:27.736 ]' 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:27.736 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:27.994 18:31:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:28.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:28.927 
18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:28.927 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:28.928 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:28.928 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.928 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.928 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.928 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.928 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:28.928 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.186 00:10:29.186 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:29.186 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:29.186 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:29.443 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:29.443 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:29.443 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.443 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.443 18:31:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.443 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:29.443 { 00:10:29.443 "cntlid": 9, 00:10:29.443 "qid": 0, 00:10:29.443 "state": "enabled", 00:10:29.443 "listen_address": { 00:10:29.443 "trtype": "TCP", 00:10:29.443 "adrfam": "IPv4", 00:10:29.443 "traddr": "10.0.0.2", 00:10:29.443 "trsvcid": "4420" 00:10:29.443 }, 00:10:29.443 "peer_address": { 00:10:29.443 "trtype": "TCP", 00:10:29.443 "adrfam": "IPv4", 00:10:29.443 "traddr": "10.0.0.1", 00:10:29.443 "trsvcid": "44934" 00:10:29.443 }, 00:10:29.443 "auth": { 00:10:29.443 "state": "completed", 00:10:29.443 "digest": "sha256", 00:10:29.443 "dhgroup": "ffdhe2048" 00:10:29.443 } 00:10:29.443 } 00:10:29.443 ]' 00:10:29.443 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:29.769 18:31:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:29.769 18:31:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:29.769 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:29.769 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:29.769 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:29.769 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:29.769 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.027 18:31:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:10:30.594 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:30.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:30.594 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:30.594 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.594 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.594 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.594 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:30.594 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:30.594 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:30.852 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.418 00:10:31.418 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:31.418 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:31.418 18:31:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:31.676 { 00:10:31.676 "cntlid": 11, 00:10:31.676 "qid": 0, 00:10:31.676 "state": "enabled", 00:10:31.676 "listen_address": { 00:10:31.676 "trtype": "TCP", 00:10:31.676 "adrfam": "IPv4", 00:10:31.676 "traddr": "10.0.0.2", 00:10:31.676 "trsvcid": "4420" 00:10:31.676 }, 00:10:31.676 "peer_address": { 00:10:31.676 "trtype": "TCP", 00:10:31.676 "adrfam": "IPv4", 00:10:31.676 "traddr": "10.0.0.1", 00:10:31.676 "trsvcid": "44974" 00:10:31.676 }, 00:10:31.676 "auth": { 00:10:31.676 "state": "completed", 00:10:31.676 "digest": "sha256", 00:10:31.676 "dhgroup": "ffdhe2048" 00:10:31.676 } 00:10:31.676 } 00:10:31.676 ]' 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:31.676 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:31.935 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:31.935 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:31.935 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:32.193 18:31:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret 
DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:10:32.759 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:32.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:32.759 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:32.759 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.759 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.759 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.759 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:32.759 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:32.759 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.017 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.275 00:10:33.275 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:33.275 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:33.275 18:31:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:33.841 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.841 18:31:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.841 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.841 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.841 18:31:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.842 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:33.842 { 00:10:33.842 "cntlid": 13, 00:10:33.842 "qid": 0, 00:10:33.842 "state": "enabled", 00:10:33.842 "listen_address": { 00:10:33.842 "trtype": "TCP", 00:10:33.842 "adrfam": "IPv4", 00:10:33.842 "traddr": "10.0.0.2", 00:10:33.842 "trsvcid": "4420" 00:10:33.842 }, 00:10:33.842 "peer_address": { 00:10:33.842 "trtype": "TCP", 00:10:33.842 "adrfam": "IPv4", 00:10:33.842 "traddr": "10.0.0.1", 00:10:33.842 "trsvcid": "35600" 00:10:33.842 }, 00:10:33.842 "auth": { 00:10:33.842 "state": "completed", 00:10:33.842 "digest": "sha256", 00:10:33.842 "dhgroup": "ffdhe2048" 00:10:33.842 } 00:10:33.842 } 00:10:33.842 ]' 00:10:33.842 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:33.842 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.842 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:33.842 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:33.842 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:33.842 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.842 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.842 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.100 18:31:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:10:34.666 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:34.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:34.666 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:34.666 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.666 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.666 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.666 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:34.666 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:34.666 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:34.924 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:34.924 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:34.924 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:34.924 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:34.924 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:34.924 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:34.924 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:10:34.924 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.924 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.182 18:31:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.182 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:35.182 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:35.440 00:10:35.441 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:35.441 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:35.441 18:31:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:35.699 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:35.699 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:35.699 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.699 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.699 18:31:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.699 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:35.699 { 00:10:35.699 "cntlid": 15, 00:10:35.699 "qid": 0, 00:10:35.699 "state": "enabled", 00:10:35.699 "listen_address": { 00:10:35.699 "trtype": "TCP", 00:10:35.699 "adrfam": "IPv4", 00:10:35.699 "traddr": "10.0.0.2", 00:10:35.699 "trsvcid": "4420" 00:10:35.699 }, 00:10:35.699 "peer_address": { 00:10:35.699 "trtype": "TCP", 00:10:35.699 "adrfam": "IPv4", 00:10:35.699 "traddr": "10.0.0.1", 00:10:35.699 "trsvcid": "35642" 00:10:35.699 }, 00:10:35.699 "auth": { 00:10:35.699 "state": "completed", 00:10:35.699 "digest": "sha256", 00:10:35.699 "dhgroup": "ffdhe2048" 00:10:35.699 } 00:10:35.699 } 00:10:35.699 ]' 00:10:35.699 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:35.699 18:31:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:35.699 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:35.699 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:35.957 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:35.957 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:35.957 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:35.957 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:36.214 18:31:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:10:36.777 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.777 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:36.777 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.777 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.777 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.777 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:36.777 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:36.777 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:36.777 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.035 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:37.602 00:10:37.602 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:37.602 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:37.602 18:31:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:37.860 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:37.860 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:37.860 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.860 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.860 18:31:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.860 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:37.860 { 00:10:37.860 "cntlid": 17, 00:10:37.860 "qid": 0, 00:10:37.860 "state": "enabled", 00:10:37.860 "listen_address": { 00:10:37.860 "trtype": "TCP", 00:10:37.860 "adrfam": "IPv4", 00:10:37.860 "traddr": "10.0.0.2", 00:10:37.860 "trsvcid": "4420" 00:10:37.860 }, 00:10:37.860 "peer_address": { 00:10:37.860 "trtype": "TCP", 00:10:37.860 "adrfam": "IPv4", 00:10:37.860 "traddr": "10.0.0.1", 00:10:37.860 "trsvcid": "35666" 00:10:37.860 }, 00:10:37.860 "auth": { 00:10:37.860 "state": "completed", 00:10:37.860 "digest": "sha256", 00:10:37.860 "dhgroup": "ffdhe3072" 00:10:37.860 } 00:10:37.860 } 00:10:37.860 ]' 00:10:37.861 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:37.861 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:37.861 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:37.861 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:37.861 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:38.119 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:38.119 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:38.119 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:38.377 18:31:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret 
DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:10:38.944 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.944 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:38.944 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.944 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.944 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.944 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:38.944 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:38.944 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.202 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.769 00:10:39.769 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.769 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.769 18:31:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
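
After each SPDK-initiator pass, the trace also exercises the kernel initiator: nvme-cli is handed the DHHC-1 secrets directly and must complete DH-HMAC-CHAP before it gets a controller, then disconnects, and the host entry is removed from the subsystem. A condensed sketch of that step, with the secret strings replaced by stand-in variables (in the trace they are the literal DHHC-1:00:... and DHHC-1:03:... values generated from key0/ckey0 above):

    key0_secret='DHHC-1:00:...'      # stand-in for the full host secret from /tmp/spdk.key-null.rDU
    ckey0_secret='DHHC-1:03:...'     # stand-in for the full controller secret from /tmp/spdk.key-sha512.vxu
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add \
        --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add \
        --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect "disconnected 1 controller(s)"
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add

The outer loops then repeat the whole sequence for every key index and dhgroup; in the portion of the log shown here that is the sha256 digest combined with the null, ffdhe2048 and ffdhe3072 groups.
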
00:10:39.769 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.769 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.769 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.769 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.769 18:31:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.769 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.769 { 00:10:39.769 "cntlid": 19, 00:10:39.769 "qid": 0, 00:10:39.769 "state": "enabled", 00:10:39.769 "listen_address": { 00:10:39.769 "trtype": "TCP", 00:10:39.769 "adrfam": "IPv4", 00:10:39.769 "traddr": "10.0.0.2", 00:10:39.769 "trsvcid": "4420" 00:10:39.769 }, 00:10:39.769 "peer_address": { 00:10:39.769 "trtype": "TCP", 00:10:39.769 "adrfam": "IPv4", 00:10:39.769 "traddr": "10.0.0.1", 00:10:39.769 "trsvcid": "35694" 00:10:39.769 }, 00:10:39.769 "auth": { 00:10:39.769 "state": "completed", 00:10:39.769 "digest": "sha256", 00:10:39.769 "dhgroup": "ffdhe3072" 00:10:39.769 } 00:10:39.769 } 00:10:39.769 ]' 00:10:39.769 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:40.027 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:40.027 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:40.027 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:40.027 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:40.027 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:40.027 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:40.027 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:40.293 18:31:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:10:40.868 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:41.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:41.128 
18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.128 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:41.696 00:10:41.696 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:41.696 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:41.696 18:31:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.696 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.696 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.696 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.696 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.696 18:31:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.696 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:41.696 { 00:10:41.696 "cntlid": 21, 00:10:41.696 "qid": 0, 00:10:41.696 "state": "enabled", 00:10:41.696 "listen_address": { 00:10:41.696 "trtype": "TCP", 00:10:41.696 "adrfam": "IPv4", 00:10:41.696 "traddr": "10.0.0.2", 00:10:41.696 "trsvcid": "4420" 00:10:41.696 }, 00:10:41.696 "peer_address": { 00:10:41.696 "trtype": "TCP", 00:10:41.696 "adrfam": "IPv4", 00:10:41.696 "traddr": "10.0.0.1", 00:10:41.696 "trsvcid": "35728" 00:10:41.696 }, 00:10:41.696 "auth": { 00:10:41.696 "state": "completed", 00:10:41.696 "digest": "sha256", 00:10:41.696 
"dhgroup": "ffdhe3072" 00:10:41.696 } 00:10:41.696 } 00:10:41.696 ]' 00:10:41.696 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:41.954 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.954 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:41.954 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:41.954 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:41.954 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.954 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.954 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:42.254 18:31:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:43.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.200 18:31:56 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:43.200 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:43.458 00:10:43.458 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:43.458 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.458 18:31:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:43.715 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.715 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.715 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.715 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:43.973 { 00:10:43.973 "cntlid": 23, 00:10:43.973 "qid": 0, 00:10:43.973 "state": "enabled", 00:10:43.973 "listen_address": { 00:10:43.973 "trtype": "TCP", 00:10:43.973 "adrfam": "IPv4", 00:10:43.973 "traddr": "10.0.0.2", 00:10:43.973 "trsvcid": "4420" 00:10:43.973 }, 00:10:43.973 "peer_address": { 00:10:43.973 "trtype": "TCP", 00:10:43.973 "adrfam": "IPv4", 00:10:43.973 "traddr": "10.0.0.1", 00:10:43.973 "trsvcid": "59326" 00:10:43.973 }, 00:10:43.973 "auth": { 00:10:43.973 "state": "completed", 00:10:43.973 "digest": "sha256", 00:10:43.973 "dhgroup": "ffdhe3072" 00:10:43.973 } 00:10:43.973 } 00:10:43.973 ]' 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.973 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:44.232 18:31:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 
8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:45.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.166 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:45.733 00:10:45.733 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:45.733 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:45.733 18:31:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:45.991 { 00:10:45.991 "cntlid": 25, 00:10:45.991 "qid": 0, 00:10:45.991 "state": "enabled", 00:10:45.991 "listen_address": { 00:10:45.991 "trtype": "TCP", 00:10:45.991 "adrfam": "IPv4", 00:10:45.991 "traddr": "10.0.0.2", 00:10:45.991 "trsvcid": "4420" 00:10:45.991 }, 00:10:45.991 "peer_address": { 00:10:45.991 "trtype": "TCP", 00:10:45.991 "adrfam": "IPv4", 00:10:45.991 "traddr": "10.0.0.1", 00:10:45.991 "trsvcid": "59352" 00:10:45.991 }, 00:10:45.991 "auth": { 00:10:45.991 "state": "completed", 00:10:45.991 "digest": "sha256", 00:10:45.991 "dhgroup": "ffdhe4096" 00:10:45.991 } 00:10:45.991 } 00:10:45.991 ]' 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.991 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:46.249 18:31:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:47.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:47.182 18:32:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.182 18:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.630 00:10:47.630 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.630 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.630 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.894 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.894 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.894 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.894 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.894 18:32:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.894 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.894 { 00:10:47.894 "cntlid": 27, 00:10:47.894 "qid": 0, 00:10:47.894 "state": "enabled", 00:10:47.894 "listen_address": { 00:10:47.894 "trtype": "TCP", 00:10:47.894 "adrfam": "IPv4", 00:10:47.894 "traddr": "10.0.0.2", 00:10:47.894 "trsvcid": "4420" 00:10:47.894 }, 00:10:47.894 "peer_address": { 00:10:47.894 "trtype": "TCP", 00:10:47.894 "adrfam": "IPv4", 00:10:47.894 "traddr": "10.0.0.1", 00:10:47.894 
"trsvcid": "59366" 00:10:47.894 }, 00:10:47.894 "auth": { 00:10:47.894 "state": "completed", 00:10:47.894 "digest": "sha256", 00:10:47.894 "dhgroup": "ffdhe4096" 00:10:47.894 } 00:10:47.894 } 00:10:47.894 ]' 00:10:47.894 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:48.161 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:48.161 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:48.161 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:48.161 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:48.161 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:48.161 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:48.161 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:48.419 18:32:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:49.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.355 18:32:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.991 00:10:49.991 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.991 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.991 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:50.288 { 00:10:50.288 "cntlid": 29, 00:10:50.288 "qid": 0, 00:10:50.288 "state": "enabled", 00:10:50.288 "listen_address": { 00:10:50.288 "trtype": "TCP", 00:10:50.288 "adrfam": "IPv4", 00:10:50.288 "traddr": "10.0.0.2", 00:10:50.288 "trsvcid": "4420" 00:10:50.288 }, 00:10:50.288 "peer_address": { 00:10:50.288 "trtype": "TCP", 00:10:50.288 "adrfam": "IPv4", 00:10:50.288 "traddr": "10.0.0.1", 00:10:50.288 "trsvcid": "59400" 00:10:50.288 }, 00:10:50.288 "auth": { 00:10:50.288 "state": "completed", 00:10:50.288 "digest": "sha256", 00:10:50.288 "dhgroup": "ffdhe4096" 00:10:50.288 } 00:10:50.288 } 00:10:50.288 ]' 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:50.288 18:32:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:50.547 18:32:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:10:51.113 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:51.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:51.113 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:51.113 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.113 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.113 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.113 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:51.113 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:51.113 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.679 18:32:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:51.937 00:10:51.937 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:51.937 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:10:51.937 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:52.194 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:52.194 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:52.194 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.195 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.195 18:32:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.195 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:52.195 { 00:10:52.195 "cntlid": 31, 00:10:52.195 "qid": 0, 00:10:52.195 "state": "enabled", 00:10:52.195 "listen_address": { 00:10:52.195 "trtype": "TCP", 00:10:52.195 "adrfam": "IPv4", 00:10:52.195 "traddr": "10.0.0.2", 00:10:52.195 "trsvcid": "4420" 00:10:52.195 }, 00:10:52.195 "peer_address": { 00:10:52.195 "trtype": "TCP", 00:10:52.195 "adrfam": "IPv4", 00:10:52.195 "traddr": "10.0.0.1", 00:10:52.195 "trsvcid": "59432" 00:10:52.195 }, 00:10:52.195 "auth": { 00:10:52.195 "state": "completed", 00:10:52.195 "digest": "sha256", 00:10:52.195 "dhgroup": "ffdhe4096" 00:10:52.195 } 00:10:52.195 } 00:10:52.195 ]' 00:10:52.195 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:52.195 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:52.195 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:52.195 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:52.195 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:52.453 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:52.453 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:52.453 18:32:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.711 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:10:53.277 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:53.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:53.277 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:53.277 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.277 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.277 18:32:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.277 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:53.277 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:53.277 
18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:53.277 18:32:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.534 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.535 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:54.100 00:10:54.100 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:54.100 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:54.100 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:54.358 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:54.358 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:54.358 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.358 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.358 18:32:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.358 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:54.358 { 00:10:54.358 "cntlid": 33, 00:10:54.358 "qid": 0, 00:10:54.358 "state": "enabled", 00:10:54.358 "listen_address": { 00:10:54.358 "trtype": "TCP", 00:10:54.358 "adrfam": "IPv4", 00:10:54.358 "traddr": "10.0.0.2", 00:10:54.358 "trsvcid": "4420" 00:10:54.358 }, 00:10:54.358 "peer_address": { 00:10:54.358 "trtype": "TCP", 00:10:54.358 "adrfam": "IPv4", 00:10:54.358 "traddr": "10.0.0.1", 
00:10:54.358 "trsvcid": "37648" 00:10:54.358 }, 00:10:54.358 "auth": { 00:10:54.358 "state": "completed", 00:10:54.358 "digest": "sha256", 00:10:54.358 "dhgroup": "ffdhe6144" 00:10:54.358 } 00:10:54.358 } 00:10:54.358 ]' 00:10:54.358 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:54.616 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:54.616 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:54.616 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:54.616 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:54.616 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:54.616 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:54.616 18:32:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:54.875 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:10:55.441 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:55.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:55.441 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:55.441 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.441 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.441 18:32:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.441 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:55.441 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:55.441 18:32:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.699 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:56.265 00:10:56.265 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:56.265 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:56.265 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:56.523 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:56.523 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:56.523 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.523 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.523 18:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.523 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:56.523 { 00:10:56.523 "cntlid": 35, 00:10:56.523 "qid": 0, 00:10:56.523 "state": "enabled", 00:10:56.523 "listen_address": { 00:10:56.523 "trtype": "TCP", 00:10:56.523 "adrfam": "IPv4", 00:10:56.523 "traddr": "10.0.0.2", 00:10:56.523 "trsvcid": "4420" 00:10:56.523 }, 00:10:56.523 "peer_address": { 00:10:56.523 "trtype": "TCP", 00:10:56.523 "adrfam": "IPv4", 00:10:56.523 "traddr": "10.0.0.1", 00:10:56.523 "trsvcid": "37670" 00:10:56.523 }, 00:10:56.523 "auth": { 00:10:56.523 "state": "completed", 00:10:56.523 "digest": "sha256", 00:10:56.523 "dhgroup": "ffdhe6144" 00:10:56.523 } 00:10:56.523 } 00:10:56.523 ]' 00:10:56.523 18:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:56.523 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:56.523 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:56.781 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:56.781 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:56.781 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:56.781 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:56.781 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:10:57.039 18:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.972 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:58.538 00:10:58.538 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:10:58.538 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:58.538 18:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:58.796 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:58.796 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:58.796 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.796 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.796 18:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.796 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:58.796 { 00:10:58.796 "cntlid": 37, 00:10:58.796 "qid": 0, 00:10:58.796 "state": "enabled", 00:10:58.796 "listen_address": { 00:10:58.796 "trtype": "TCP", 00:10:58.796 "adrfam": "IPv4", 00:10:58.796 "traddr": "10.0.0.2", 00:10:58.796 "trsvcid": "4420" 00:10:58.796 }, 00:10:58.796 "peer_address": { 00:10:58.796 "trtype": "TCP", 00:10:58.796 "adrfam": "IPv4", 00:10:58.796 "traddr": "10.0.0.1", 00:10:58.796 "trsvcid": "37690" 00:10:58.796 }, 00:10:58.796 "auth": { 00:10:58.796 "state": "completed", 00:10:58.796 "digest": "sha256", 00:10:58.796 "dhgroup": "ffdhe6144" 00:10:58.796 } 00:10:58.796 } 00:10:58.796 ]' 00:10:58.796 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:58.797 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:58.797 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:58.797 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:58.797 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.054 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.054 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.054 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.055 18:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:11:00.013 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.013 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:00.013 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.013 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.013 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:11:00.013 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.013 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:00.013 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:00.274 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:00.274 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:00.275 18:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:00.841 00:11:00.841 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:00.841 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:00.841 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.099 { 00:11:01.099 "cntlid": 39, 00:11:01.099 "qid": 0, 00:11:01.099 "state": "enabled", 00:11:01.099 "listen_address": { 00:11:01.099 "trtype": "TCP", 00:11:01.099 "adrfam": "IPv4", 00:11:01.099 "traddr": "10.0.0.2", 00:11:01.099 "trsvcid": "4420" 00:11:01.099 }, 00:11:01.099 "peer_address": { 00:11:01.099 "trtype": "TCP", 00:11:01.099 "adrfam": 
"IPv4", 00:11:01.099 "traddr": "10.0.0.1", 00:11:01.099 "trsvcid": "37720" 00:11:01.099 }, 00:11:01.099 "auth": { 00:11:01.099 "state": "completed", 00:11:01.099 "digest": "sha256", 00:11:01.099 "dhgroup": "ffdhe6144" 00:11:01.099 } 00:11:01.099 } 00:11:01.099 ]' 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.099 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:01.358 18:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:11:01.926 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.926 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:01.926 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.926 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.926 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.926 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:01.926 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:01.926 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:01.926 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:02.491 18:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:03.073 00:11:03.073 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.073 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.073 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.332 { 00:11:03.332 "cntlid": 41, 00:11:03.332 "qid": 0, 00:11:03.332 "state": "enabled", 00:11:03.332 "listen_address": { 00:11:03.332 "trtype": "TCP", 00:11:03.332 "adrfam": "IPv4", 00:11:03.332 "traddr": "10.0.0.2", 00:11:03.332 "trsvcid": "4420" 00:11:03.332 }, 00:11:03.332 "peer_address": { 00:11:03.332 "trtype": "TCP", 00:11:03.332 "adrfam": "IPv4", 00:11:03.332 "traddr": "10.0.0.1", 00:11:03.332 "trsvcid": "46832" 00:11:03.332 }, 00:11:03.332 "auth": { 00:11:03.332 "state": "completed", 00:11:03.332 "digest": "sha256", 00:11:03.332 "dhgroup": "ffdhe8192" 00:11:03.332 } 00:11:03.332 } 00:11:03.332 ]' 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:03.332 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.590 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.590 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.590 18:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:03.848 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:11:04.436 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.436 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:04.436 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.436 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.436 18:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.436 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.436 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:04.436 18:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:04.707 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
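The trace up to this point is one pass of the test's connect_authenticate helper for sha256/ffdhe8192 with key1: the host restricts its DH-HMAC-CHAP digests and DH groups via bdev_nvme_set_options, the target registers the host NQN on the subsystem with the host key and the controller (bidirectional) key via nvmf_subsystem_add_host, and the host then attaches an authenticated controller with bdev_nvme_attach_controller. Below is a minimal standalone sketch of that sequence using only the commands and flags visible in this log; it assumes the key names key1/ckey1 have already been registered with both the target and the host (the key-registration step is outside this excerpt) and that the target listens on 10.0.0.2:4420 as above.

    #!/usr/bin/env bash
    # Sketch of one authenticated attach, mirroring the RPCs traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host-side bdev_nvme options: negotiate only this digest / DH group.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side (default RPC socket): allow the host NQN with a host key and
    # a controller key, i.e. bidirectional authentication.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach an authenticated controller over TCP.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1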
00:11:05.643 00:11:05.643 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.643 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.643 18:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.643 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.643 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.643 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.643 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.643 18:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.643 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.643 { 00:11:05.643 "cntlid": 43, 00:11:05.643 "qid": 0, 00:11:05.643 "state": "enabled", 00:11:05.643 "listen_address": { 00:11:05.643 "trtype": "TCP", 00:11:05.643 "adrfam": "IPv4", 00:11:05.643 "traddr": "10.0.0.2", 00:11:05.643 "trsvcid": "4420" 00:11:05.643 }, 00:11:05.643 "peer_address": { 00:11:05.643 "trtype": "TCP", 00:11:05.643 "adrfam": "IPv4", 00:11:05.643 "traddr": "10.0.0.1", 00:11:05.643 "trsvcid": "46868" 00:11:05.643 }, 00:11:05.643 "auth": { 00:11:05.643 "state": "completed", 00:11:05.643 "digest": "sha256", 00:11:05.643 "dhgroup": "ffdhe8192" 00:11:05.643 } 00:11:05.643 } 00:11:05.643 ]' 00:11:05.643 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.903 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.903 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.903 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:05.903 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.903 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.903 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.903 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:06.161 18:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:11:07.096 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:07.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:07.097 18:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.032 00:11:08.032 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:08.032 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:08.032 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:08.291 { 00:11:08.291 "cntlid": 45, 00:11:08.291 "qid": 0, 00:11:08.291 "state": "enabled", 00:11:08.291 "listen_address": { 00:11:08.291 "trtype": "TCP", 00:11:08.291 "adrfam": 
"IPv4", 00:11:08.291 "traddr": "10.0.0.2", 00:11:08.291 "trsvcid": "4420" 00:11:08.291 }, 00:11:08.291 "peer_address": { 00:11:08.291 "trtype": "TCP", 00:11:08.291 "adrfam": "IPv4", 00:11:08.291 "traddr": "10.0.0.1", 00:11:08.291 "trsvcid": "46900" 00:11:08.291 }, 00:11:08.291 "auth": { 00:11:08.291 "state": "completed", 00:11:08.291 "digest": "sha256", 00:11:08.291 "dhgroup": "ffdhe8192" 00:11:08.291 } 00:11:08.291 } 00:11:08.291 ]' 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:08.291 18:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.859 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:11:09.427 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:09.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:09.427 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:09.427 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.427 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.427 18:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.427 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:09.427 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:09.427 18:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:09.684 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:09.684 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.684 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:09.684 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:09.685 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:09.685 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:11:09.685 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:11:09.685 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.685 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.685 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.685 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:09.685 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:10.255 00:11:10.255 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:10.255 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:10.255 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:10.529 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.529 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.529 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.529 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.529 18:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.529 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:10.529 { 00:11:10.529 "cntlid": 47, 00:11:10.529 "qid": 0, 00:11:10.529 "state": "enabled", 00:11:10.529 "listen_address": { 00:11:10.529 "trtype": "TCP", 00:11:10.529 "adrfam": "IPv4", 00:11:10.529 "traddr": "10.0.0.2", 00:11:10.529 "trsvcid": "4420" 00:11:10.529 }, 00:11:10.529 "peer_address": { 00:11:10.529 "trtype": "TCP", 00:11:10.529 "adrfam": "IPv4", 00:11:10.529 "traddr": "10.0.0.1", 00:11:10.529 "trsvcid": "46932" 00:11:10.529 }, 00:11:10.529 "auth": { 00:11:10.529 "state": "completed", 00:11:10.529 "digest": "sha256", 00:11:10.529 "dhgroup": "ffdhe8192" 00:11:10.529 } 00:11:10.529 } 00:11:10.529 ]' 00:11:10.529 18:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:10.821 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:10.821 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:10.821 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:10.821 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.821 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.821 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.821 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.080 18:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:11.644 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:11.902 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:12.159 00:11:12.418 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.418 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:12.418 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.676 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.676 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.676 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.676 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.676 18:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.676 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.676 { 00:11:12.676 "cntlid": 49, 00:11:12.676 "qid": 0, 00:11:12.676 "state": "enabled", 00:11:12.676 "listen_address": { 00:11:12.676 "trtype": "TCP", 00:11:12.676 "adrfam": "IPv4", 00:11:12.676 "traddr": "10.0.0.2", 00:11:12.676 "trsvcid": "4420" 00:11:12.676 }, 00:11:12.676 "peer_address": { 00:11:12.676 "trtype": "TCP", 00:11:12.676 "adrfam": "IPv4", 00:11:12.676 "traddr": "10.0.0.1", 00:11:12.676 "trsvcid": "46954" 00:11:12.676 }, 00:11:12.676 "auth": { 00:11:12.676 "state": "completed", 00:11:12.676 "digest": "sha384", 00:11:12.676 "dhgroup": "null" 00:11:12.676 } 00:11:12.676 } 00:11:12.676 ]' 00:11:12.676 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.676 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.676 18:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.676 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:12.676 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.676 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.676 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.676 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.935 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:11:13.503 18:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:13.762 18:32:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.762 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.021 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.021 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.021 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:14.279 00:11:14.280 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.280 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.280 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.538 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.538 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.538 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.538 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.538 18:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.538 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:14.538 { 00:11:14.538 "cntlid": 
51, 00:11:14.538 "qid": 0, 00:11:14.538 "state": "enabled", 00:11:14.538 "listen_address": { 00:11:14.538 "trtype": "TCP", 00:11:14.538 "adrfam": "IPv4", 00:11:14.538 "traddr": "10.0.0.2", 00:11:14.538 "trsvcid": "4420" 00:11:14.538 }, 00:11:14.538 "peer_address": { 00:11:14.538 "trtype": "TCP", 00:11:14.538 "adrfam": "IPv4", 00:11:14.538 "traddr": "10.0.0.1", 00:11:14.538 "trsvcid": "34788" 00:11:14.538 }, 00:11:14.538 "auth": { 00:11:14.538 "state": "completed", 00:11:14.538 "digest": "sha384", 00:11:14.538 "dhgroup": "null" 00:11:14.538 } 00:11:14.538 } 00:11:14.538 ]' 00:11:14.538 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.539 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:14.539 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.539 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:14.539 18:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.539 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.539 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.539 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.797 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:11:15.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:15.733 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.733 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.733 18:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:15.733 18:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 
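After each attach, the test confirms that authentication actually completed with the parameters it configured by querying the subsystem's queue pairs and checking the auth block with jq; that pattern repeats throughout this log. A minimal sketch of the verification and teardown for the sha384/null pass that begins here is shown below; the JSON field names (.auth.state, .auth.digest, .auth.dhgroup) are taken from the nvmf_subsystem_get_qpairs output captured above, and the final detach matches the cleanup done before each new digest/dhgroup/key combination.

    # Query the target for the subsystem's qpairs and inspect the auth result.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The pass succeeds only if authentication completed with the expected
    # digest and DH group.
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]

    # Tear down the host-side controller before the next combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0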
00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:15.991 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:16.249 00:11:16.249 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.249 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:16.249 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.508 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.508 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.508 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.508 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.508 18:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.508 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.508 { 00:11:16.508 "cntlid": 53, 00:11:16.508 "qid": 0, 00:11:16.508 "state": "enabled", 00:11:16.508 "listen_address": { 00:11:16.508 "trtype": "TCP", 00:11:16.508 "adrfam": "IPv4", 00:11:16.508 "traddr": "10.0.0.2", 00:11:16.508 "trsvcid": "4420" 00:11:16.508 }, 00:11:16.508 "peer_address": { 00:11:16.508 "trtype": "TCP", 00:11:16.508 "adrfam": "IPv4", 00:11:16.508 "traddr": "10.0.0.1", 00:11:16.508 "trsvcid": "34812" 00:11:16.508 }, 00:11:16.508 "auth": { 00:11:16.508 "state": "completed", 00:11:16.508 "digest": "sha384", 00:11:16.508 "dhgroup": "null" 00:11:16.508 } 00:11:16.508 } 00:11:16.508 ]' 00:11:16.508 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.508 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:16.508 18:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.766 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:16.767 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.767 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.767 18:32:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.767 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.040 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:11:17.609 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.609 18:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:17.609 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.609 18:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.609 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.609 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.609 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:17.609 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:17.868 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:18.126 00:11:18.126 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.126 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.126 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:18.384 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.384 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.384 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.384 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.643 18:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.643 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:18.643 { 00:11:18.643 "cntlid": 55, 00:11:18.643 "qid": 0, 00:11:18.643 "state": "enabled", 00:11:18.643 "listen_address": { 00:11:18.643 "trtype": "TCP", 00:11:18.643 "adrfam": "IPv4", 00:11:18.643 "traddr": "10.0.0.2", 00:11:18.643 "trsvcid": "4420" 00:11:18.643 }, 00:11:18.643 "peer_address": { 00:11:18.643 "trtype": "TCP", 00:11:18.643 "adrfam": "IPv4", 00:11:18.643 "traddr": "10.0.0.1", 00:11:18.643 "trsvcid": "34838" 00:11:18.643 }, 00:11:18.643 "auth": { 00:11:18.643 "state": "completed", 00:11:18.643 "digest": "sha384", 00:11:18.643 "dhgroup": "null" 00:11:18.643 } 00:11:18.643 } 00:11:18.643 ]' 00:11:18.643 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:18.643 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:18.643 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:18.643 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:18.643 18:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:18.643 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.643 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.643 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.902 18:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.838 18:32:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.838 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.095 18:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.095 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.095 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:20.353 00:11:20.353 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:20.353 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.353 18:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:20.612 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.612 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.612 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.612 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.612 18:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.612 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:20.612 { 00:11:20.612 "cntlid": 57, 00:11:20.612 "qid": 0, 00:11:20.612 "state": "enabled", 
00:11:20.612 "listen_address": { 00:11:20.612 "trtype": "TCP", 00:11:20.612 "adrfam": "IPv4", 00:11:20.612 "traddr": "10.0.0.2", 00:11:20.612 "trsvcid": "4420" 00:11:20.612 }, 00:11:20.612 "peer_address": { 00:11:20.612 "trtype": "TCP", 00:11:20.612 "adrfam": "IPv4", 00:11:20.612 "traddr": "10.0.0.1", 00:11:20.612 "trsvcid": "34870" 00:11:20.612 }, 00:11:20.612 "auth": { 00:11:20.612 "state": "completed", 00:11:20.612 "digest": "sha384", 00:11:20.612 "dhgroup": "ffdhe2048" 00:11:20.612 } 00:11:20.612 } 00:11:20.612 ]' 00:11:20.612 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:20.612 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:20.612 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:20.871 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:20.871 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:20.871 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.871 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.871 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.130 18:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:22.065 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key1 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.066 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:22.324 00:11:22.324 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:22.324 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.324 18:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:22.582 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.582 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.582 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.582 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.582 18:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.582 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:22.582 { 00:11:22.582 "cntlid": 59, 00:11:22.582 "qid": 0, 00:11:22.582 "state": "enabled", 00:11:22.582 "listen_address": { 00:11:22.582 "trtype": "TCP", 00:11:22.582 "adrfam": "IPv4", 00:11:22.582 "traddr": "10.0.0.2", 00:11:22.582 "trsvcid": "4420" 00:11:22.582 }, 00:11:22.582 "peer_address": { 00:11:22.582 "trtype": "TCP", 00:11:22.582 "adrfam": "IPv4", 00:11:22.582 "traddr": "10.0.0.1", 00:11:22.582 "trsvcid": "34896" 00:11:22.582 }, 00:11:22.582 "auth": { 00:11:22.582 "state": "completed", 00:11:22.582 "digest": "sha384", 00:11:22.582 "dhgroup": "ffdhe2048" 00:11:22.582 } 00:11:22.582 } 00:11:22.582 ]' 00:11:22.582 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:22.840 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:22.840 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:22.840 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:22.840 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:22.840 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.840 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.840 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.098 18:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.087 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:24.346 00:11:24.346 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:24.346 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:24.346 18:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.604 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.604 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.604 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.604 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.604 18:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.604 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:24.604 { 00:11:24.604 "cntlid": 61, 00:11:24.604 "qid": 0, 00:11:24.604 "state": "enabled", 00:11:24.604 "listen_address": { 00:11:24.604 "trtype": "TCP", 00:11:24.604 "adrfam": "IPv4", 00:11:24.604 "traddr": "10.0.0.2", 00:11:24.604 "trsvcid": "4420" 00:11:24.604 }, 00:11:24.604 "peer_address": { 00:11:24.604 "trtype": "TCP", 00:11:24.604 "adrfam": "IPv4", 00:11:24.604 "traddr": "10.0.0.1", 00:11:24.604 "trsvcid": "47546" 00:11:24.604 }, 00:11:24.604 "auth": { 00:11:24.604 "state": "completed", 00:11:24.604 "digest": "sha384", 00:11:24.604 "dhgroup": "ffdhe2048" 00:11:24.604 } 00:11:24.604 } 00:11:24.604 ]' 00:11:24.604 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:24.604 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:24.604 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:24.863 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:24.863 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:24.863 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.863 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.863 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.123 18:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:11:25.691 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.691 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:25.691 
18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.691 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.691 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:25.691 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:25.691 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.950 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:26.518 00:11:26.519 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:26.519 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.519 18:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:26.778 { 00:11:26.778 "cntlid": 63, 00:11:26.778 "qid": 0, 00:11:26.778 "state": 
"enabled", 00:11:26.778 "listen_address": { 00:11:26.778 "trtype": "TCP", 00:11:26.778 "adrfam": "IPv4", 00:11:26.778 "traddr": "10.0.0.2", 00:11:26.778 "trsvcid": "4420" 00:11:26.778 }, 00:11:26.778 "peer_address": { 00:11:26.778 "trtype": "TCP", 00:11:26.778 "adrfam": "IPv4", 00:11:26.778 "traddr": "10.0.0.1", 00:11:26.778 "trsvcid": "47576" 00:11:26.778 }, 00:11:26.778 "auth": { 00:11:26.778 "state": "completed", 00:11:26.778 "digest": "sha384", 00:11:26.778 "dhgroup": "ffdhe2048" 00:11:26.778 } 00:11:26.778 } 00:11:26.778 ]' 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.778 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:27.037 18:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:11:27.973 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.973 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:27.973 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.973 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.973 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.973 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:27.973 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:27.973 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:27.973 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.231 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:28.490 00:11:28.490 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:28.490 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:28.490 18:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.749 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.749 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.749 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.749 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.749 18:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.749 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:28.749 { 00:11:28.749 "cntlid": 65, 00:11:28.749 "qid": 0, 00:11:28.749 "state": "enabled", 00:11:28.749 "listen_address": { 00:11:28.749 "trtype": "TCP", 00:11:28.749 "adrfam": "IPv4", 00:11:28.749 "traddr": "10.0.0.2", 00:11:28.749 "trsvcid": "4420" 00:11:28.749 }, 00:11:28.749 "peer_address": { 00:11:28.749 "trtype": "TCP", 00:11:28.749 "adrfam": "IPv4", 00:11:28.749 "traddr": "10.0.0.1", 00:11:28.749 "trsvcid": "47606" 00:11:28.750 }, 00:11:28.750 "auth": { 00:11:28.750 "state": "completed", 00:11:28.750 "digest": "sha384", 00:11:28.750 "dhgroup": "ffdhe3072" 00:11:28.750 } 00:11:28.750 } 00:11:28.750 ]' 00:11:28.750 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.009 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.009 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.009 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:29.009 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.009 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.009 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.009 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.267 18:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.207 18:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:30.775 00:11:30.775 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.775 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.775 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.033 { 00:11:31.033 "cntlid": 67, 00:11:31.033 "qid": 0, 00:11:31.033 "state": "enabled", 00:11:31.033 "listen_address": { 00:11:31.033 "trtype": "TCP", 00:11:31.033 "adrfam": "IPv4", 00:11:31.033 "traddr": "10.0.0.2", 00:11:31.033 "trsvcid": "4420" 00:11:31.033 }, 00:11:31.033 "peer_address": { 00:11:31.033 "trtype": "TCP", 00:11:31.033 "adrfam": "IPv4", 00:11:31.033 "traddr": "10.0.0.1", 00:11:31.033 "trsvcid": "47634" 00:11:31.033 }, 00:11:31.033 "auth": { 00:11:31.033 "state": "completed", 00:11:31.033 "digest": "sha384", 00:11:31.033 "dhgroup": "ffdhe3072" 00:11:31.033 } 00:11:31.033 } 00:11:31.033 ]' 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.033 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.291 18:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:11:32.224 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.224 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:32.224 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.224 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.224 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.224 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:32.224 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:32.224 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.483 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.484 18:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:32.742 00:11:32.742 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:32.742 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:32.742 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.000 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.000 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.000 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.000 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.000 18:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.000 
18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.000 { 00:11:33.000 "cntlid": 69, 00:11:33.000 "qid": 0, 00:11:33.000 "state": "enabled", 00:11:33.000 "listen_address": { 00:11:33.000 "trtype": "TCP", 00:11:33.000 "adrfam": "IPv4", 00:11:33.000 "traddr": "10.0.0.2", 00:11:33.000 "trsvcid": "4420" 00:11:33.000 }, 00:11:33.000 "peer_address": { 00:11:33.000 "trtype": "TCP", 00:11:33.000 "adrfam": "IPv4", 00:11:33.000 "traddr": "10.0.0.1", 00:11:33.000 "trsvcid": "57872" 00:11:33.000 }, 00:11:33.000 "auth": { 00:11:33.000 "state": "completed", 00:11:33.000 "digest": "sha384", 00:11:33.000 "dhgroup": "ffdhe3072" 00:11:33.000 } 00:11:33.000 } 00:11:33.000 ]' 00:11:33.000 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.000 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.000 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.259 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:33.259 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.259 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.259 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.259 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.517 18:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:11:34.452 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.452 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:34.452 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.452 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.452 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.452 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:34.453 18:32:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:34.453 18:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:35.019 00:11:35.019 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.019 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.019 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.278 { 00:11:35.278 "cntlid": 71, 00:11:35.278 "qid": 0, 00:11:35.278 "state": "enabled", 00:11:35.278 "listen_address": { 00:11:35.278 "trtype": "TCP", 00:11:35.278 "adrfam": "IPv4", 00:11:35.278 "traddr": "10.0.0.2", 00:11:35.278 "trsvcid": "4420" 00:11:35.278 }, 00:11:35.278 "peer_address": { 00:11:35.278 "trtype": "TCP", 00:11:35.278 "adrfam": "IPv4", 00:11:35.278 "traddr": "10.0.0.1", 00:11:35.278 "trsvcid": "57898" 00:11:35.278 }, 00:11:35.278 "auth": { 00:11:35.278 "state": "completed", 00:11:35.278 "digest": "sha384", 00:11:35.278 "dhgroup": "ffdhe3072" 00:11:35.278 } 00:11:35.278 } 00:11:35.278 ]' 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.278 18:32:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.278 18:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.536 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.469 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:36.470 18:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.035 00:11:37.035 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.035 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.035 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.318 { 00:11:37.318 "cntlid": 73, 00:11:37.318 "qid": 0, 00:11:37.318 "state": "enabled", 00:11:37.318 "listen_address": { 00:11:37.318 "trtype": "TCP", 00:11:37.318 "adrfam": "IPv4", 00:11:37.318 "traddr": "10.0.0.2", 00:11:37.318 "trsvcid": "4420" 00:11:37.318 }, 00:11:37.318 "peer_address": { 00:11:37.318 "trtype": "TCP", 00:11:37.318 "adrfam": "IPv4", 00:11:37.318 "traddr": "10.0.0.1", 00:11:37.318 "trsvcid": "57922" 00:11:37.318 }, 00:11:37.318 "auth": { 00:11:37.318 "state": "completed", 00:11:37.318 "digest": "sha384", 00:11:37.318 "dhgroup": "ffdhe4096" 00:11:37.318 } 00:11:37.318 } 00:11:37.318 ]' 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.318 18:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.577 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:11:38.510 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.510 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:38.510 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.510 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.510 18:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.510 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.510 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:38.510 18:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:38.510 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:38.510 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.510 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.510 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:38.510 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:38.510 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.510 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.510 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.510 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.769 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.769 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:38.769 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.027 00:11:39.027 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.027 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.027 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.285 18:32:52 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.285 { 00:11:39.285 "cntlid": 75, 00:11:39.285 "qid": 0, 00:11:39.285 "state": "enabled", 00:11:39.285 "listen_address": { 00:11:39.285 "trtype": "TCP", 00:11:39.285 "adrfam": "IPv4", 00:11:39.285 "traddr": "10.0.0.2", 00:11:39.285 "trsvcid": "4420" 00:11:39.285 }, 00:11:39.285 "peer_address": { 00:11:39.285 "trtype": "TCP", 00:11:39.285 "adrfam": "IPv4", 00:11:39.285 "traddr": "10.0.0.1", 00:11:39.285 "trsvcid": "57956" 00:11:39.285 }, 00:11:39.285 "auth": { 00:11:39.285 "state": "completed", 00:11:39.285 "digest": "sha384", 00:11:39.285 "dhgroup": "ffdhe4096" 00:11:39.285 } 00:11:39.285 } 00:11:39.285 ]' 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:39.285 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.543 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.543 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.543 18:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.800 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:11:40.375 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.375 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:40.375 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.376 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.376 18:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.376 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.376 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:40.376 18:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.635 18:32:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:40.635 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.201 00:11:41.201 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.201 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.201 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.460 { 00:11:41.460 "cntlid": 77, 00:11:41.460 "qid": 0, 00:11:41.460 "state": "enabled", 00:11:41.460 "listen_address": { 00:11:41.460 "trtype": "TCP", 00:11:41.460 "adrfam": "IPv4", 00:11:41.460 "traddr": "10.0.0.2", 00:11:41.460 "trsvcid": "4420" 00:11:41.460 }, 00:11:41.460 "peer_address": { 00:11:41.460 "trtype": "TCP", 00:11:41.460 "adrfam": "IPv4", 00:11:41.460 "traddr": "10.0.0.1", 00:11:41.460 "trsvcid": "57974" 00:11:41.460 }, 00:11:41.460 "auth": { 00:11:41.460 "state": "completed", 00:11:41.460 "digest": "sha384", 00:11:41.460 "dhgroup": "ffdhe4096" 00:11:41.460 } 00:11:41.460 } 00:11:41.460 ]' 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.460 18:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.027 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:11:42.594 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.594 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:42.594 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.594 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.594 18:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.594 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.594 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:42.594 18:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.853 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:42.853 18:32:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:43.111 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.369 { 00:11:43.369 "cntlid": 79, 00:11:43.369 "qid": 0, 00:11:43.369 "state": "enabled", 00:11:43.369 "listen_address": { 00:11:43.369 "trtype": "TCP", 00:11:43.369 "adrfam": "IPv4", 00:11:43.369 "traddr": "10.0.0.2", 00:11:43.369 "trsvcid": "4420" 00:11:43.369 }, 00:11:43.369 "peer_address": { 00:11:43.369 "trtype": "TCP", 00:11:43.369 "adrfam": "IPv4", 00:11:43.369 "traddr": "10.0.0.1", 00:11:43.369 "trsvcid": "33290" 00:11:43.369 }, 00:11:43.369 "auth": { 00:11:43.369 "state": "completed", 00:11:43.369 "digest": "sha384", 00:11:43.369 "dhgroup": "ffdhe4096" 00:11:43.369 } 00:11:43.369 } 00:11:43.369 ]' 00:11:43.369 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.627 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.627 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.627 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:43.627 18:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.627 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.628 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.628 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.885 18:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:11:44.817 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.817 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:44.817 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.817 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.817 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.817 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:44.817 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.817 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:44.817 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.075 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.641 00:11:45.641 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.641 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.641 18:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.898 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.898 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.898 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.898 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
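Each pass of the loop traced above boils down to three RPC calls: constrain the host's DH-HMAC-CHAP digests and DH groups, register the host on the subsystem with a key pair, and attach a controller using that same pair. The sketch below reconstructs one pass (sha384 + ffdhe6144, key0) from the commands visible in the trace; the key names key0/ckey0 are assumed to have been registered earlier in the run, and invoking rpc.py without -s for the target socket is a simplification made for readability (the script itself goes through its rpc_cmd and hostrpc wrappers).

# Hedged sketch of one iteration of the loop above (sha384 + ffdhe6144, key0).
# Assumes key0/ckey0 were registered earlier in the run and that the target
# listens on 10.0.0.2:4420, as the trace shows.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: restrict the initiator to a single digest and DH group.
"$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Target side: allow the host on the subsystem with its DH-HMAC-CHAP key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach a controller, authenticating with the same key pair.
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0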
00:11:45.898 18:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.899 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.899 { 00:11:45.899 "cntlid": 81, 00:11:45.899 "qid": 0, 00:11:45.899 "state": "enabled", 00:11:45.899 "listen_address": { 00:11:45.899 "trtype": "TCP", 00:11:45.899 "adrfam": "IPv4", 00:11:45.899 "traddr": "10.0.0.2", 00:11:45.899 "trsvcid": "4420" 00:11:45.899 }, 00:11:45.899 "peer_address": { 00:11:45.899 "trtype": "TCP", 00:11:45.899 "adrfam": "IPv4", 00:11:45.899 "traddr": "10.0.0.1", 00:11:45.899 "trsvcid": "33310" 00:11:45.899 }, 00:11:45.899 "auth": { 00:11:45.899 "state": "completed", 00:11:45.899 "digest": "sha384", 00:11:45.899 "dhgroup": "ffdhe6144" 00:11:45.899 } 00:11:45.899 } 00:11:45.899 ]' 00:11:45.899 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.899 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.899 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.899 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:45.899 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.899 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.899 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.899 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.163 18:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:11:47.112 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.112 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:47.112 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.112 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.112 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.113 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:47.113 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:47.113 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.371 18:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.629 00:11:47.629 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.629 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.629 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.887 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.887 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.887 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.887 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.146 18:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.146 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:48.146 { 00:11:48.146 "cntlid": 83, 00:11:48.146 "qid": 0, 00:11:48.146 "state": "enabled", 00:11:48.146 "listen_address": { 00:11:48.146 "trtype": "TCP", 00:11:48.146 "adrfam": "IPv4", 00:11:48.147 "traddr": "10.0.0.2", 00:11:48.147 "trsvcid": "4420" 00:11:48.147 }, 00:11:48.147 "peer_address": { 00:11:48.147 "trtype": "TCP", 00:11:48.147 "adrfam": "IPv4", 00:11:48.147 "traddr": "10.0.0.1", 00:11:48.147 "trsvcid": "33338" 00:11:48.147 }, 00:11:48.147 "auth": { 00:11:48.147 "state": "completed", 00:11:48.147 "digest": "sha384", 00:11:48.147 "dhgroup": "ffdhe6144" 00:11:48.147 } 00:11:48.147 } 00:11:48.147 ]' 00:11:48.147 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:48.147 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:48.147 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.147 18:33:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:48.147 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.147 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.147 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.147 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.406 18:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:11:49.341 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.341 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:49.341 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.341 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.341 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.341 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.341 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:49.341 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:49.599 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:11:49.599 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.599 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:49.599 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:49.599 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:49.599 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.599 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.600 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.600 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.600 18:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.600 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.600 18:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.166 00:11:50.166 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:50.166 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:50.166 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.423 { 00:11:50.423 "cntlid": 85, 00:11:50.423 "qid": 0, 00:11:50.423 "state": "enabled", 00:11:50.423 "listen_address": { 00:11:50.423 "trtype": "TCP", 00:11:50.423 "adrfam": "IPv4", 00:11:50.423 "traddr": "10.0.0.2", 00:11:50.423 "trsvcid": "4420" 00:11:50.423 }, 00:11:50.423 "peer_address": { 00:11:50.423 "trtype": "TCP", 00:11:50.423 "adrfam": "IPv4", 00:11:50.423 "traddr": "10.0.0.1", 00:11:50.423 "trsvcid": "33372" 00:11:50.423 }, 00:11:50.423 "auth": { 00:11:50.423 "state": "completed", 00:11:50.423 "digest": "sha384", 00:11:50.423 "dhgroup": "ffdhe6144" 00:11:50.423 } 00:11:50.423 } 00:11:50.423 ]' 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.423 18:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.682 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:11:51.616 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect 
-n nqn.2024-03.io.spdk:cnode0 00:11:51.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.616 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:51.616 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.616 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.616 18:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.616 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.616 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:51.616 18:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:51.874 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:52.132 00:11:52.391 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.391 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.391 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.650 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.650 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.650 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.650 18:33:05 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.651 18:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.651 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.651 { 00:11:52.651 "cntlid": 87, 00:11:52.651 "qid": 0, 00:11:52.651 "state": "enabled", 00:11:52.651 "listen_address": { 00:11:52.651 "trtype": "TCP", 00:11:52.651 "adrfam": "IPv4", 00:11:52.651 "traddr": "10.0.0.2", 00:11:52.651 "trsvcid": "4420" 00:11:52.651 }, 00:11:52.651 "peer_address": { 00:11:52.651 "trtype": "TCP", 00:11:52.651 "adrfam": "IPv4", 00:11:52.651 "traddr": "10.0.0.1", 00:11:52.651 "trsvcid": "33392" 00:11:52.651 }, 00:11:52.651 "auth": { 00:11:52.651 "state": "completed", 00:11:52.651 "digest": "sha384", 00:11:52.651 "dhgroup": "ffdhe6144" 00:11:52.651 } 00:11:52.651 } 00:11:52.651 ]' 00:11:52.651 18:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.651 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:52.651 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.651 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:52.651 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.651 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.651 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.651 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.908 18:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:11:53.843 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.843 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:53.843 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.843 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.843 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.843 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:53.843 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.843 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:53.843 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 
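Once the controller is attached, the trace verifies the result in two places: bdev_nvme_get_controllers on the host socket should list nvme0, and the target's qpair report should show the negotiated digest, DH group, and a completed auth state. A minimal sketch of that check, assuming the same rpc.py path and NQNs as in the trace and that jq is available:

# Hedged sketch of the verification step (here for the sha384 + ffdhe8192 pass).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

"$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

# The qpair report on the target carries the negotiated auth parameters.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
echo "$qpairs" | jq -r '.[0].auth.digest'    # expect: sha384
echo "$qpairs" | jq -r '.[0].auth.dhgroup'   # expect: ffdhe8192
echo "$qpairs" | jq -r '.[0].auth.state'     # expect: completed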
00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.101 18:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.669 00:11:54.669 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.669 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.669 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.928 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.928 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.928 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.928 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.187 18:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.187 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.187 { 00:11:55.187 "cntlid": 89, 00:11:55.187 "qid": 0, 00:11:55.187 "state": "enabled", 00:11:55.187 "listen_address": { 00:11:55.187 "trtype": "TCP", 00:11:55.187 "adrfam": "IPv4", 00:11:55.187 "traddr": "10.0.0.2", 00:11:55.187 "trsvcid": "4420" 00:11:55.187 }, 00:11:55.187 "peer_address": { 00:11:55.187 "trtype": "TCP", 00:11:55.187 "adrfam": "IPv4", 00:11:55.187 "traddr": "10.0.0.1", 00:11:55.188 "trsvcid": "37630" 00:11:55.188 }, 00:11:55.188 "auth": { 00:11:55.188 "state": "completed", 00:11:55.188 "digest": "sha384", 00:11:55.188 "dhgroup": "ffdhe8192" 00:11:55.188 } 00:11:55.188 } 00:11:55.188 ]' 00:11:55.188 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.188 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.188 18:33:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.188 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.188 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.188 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.188 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.188 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.447 18:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:11:56.384 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.384 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:56.384 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.384 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.384 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.384 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.384 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:56.384 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.643 18:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.210 00:11:57.210 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.210 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.210 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.468 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.468 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.468 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.468 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.468 18:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.468 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:57.469 { 00:11:57.469 "cntlid": 91, 00:11:57.469 "qid": 0, 00:11:57.469 "state": "enabled", 00:11:57.469 "listen_address": { 00:11:57.469 "trtype": "TCP", 00:11:57.469 "adrfam": "IPv4", 00:11:57.469 "traddr": "10.0.0.2", 00:11:57.469 "trsvcid": "4420" 00:11:57.469 }, 00:11:57.469 "peer_address": { 00:11:57.469 "trtype": "TCP", 00:11:57.469 "adrfam": "IPv4", 00:11:57.469 "traddr": "10.0.0.1", 00:11:57.469 "trsvcid": "37664" 00:11:57.469 }, 00:11:57.469 "auth": { 00:11:57.469 "state": "completed", 00:11:57.469 "digest": "sha384", 00:11:57.469 "dhgroup": "ffdhe8192" 00:11:57.469 } 00:11:57.469 } 00:11:57.469 ]' 00:11:57.469 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.469 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.469 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.469 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:57.469 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.768 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.768 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.768 18:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.768 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:11:58.726 
18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.726 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:11:58.726 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.726 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.726 18:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.726 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.726 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:58.726 18:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.984 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.550 00:11:59.550 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.550 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.550 18:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.808 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.808 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:11:59.808 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.808 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.808 18:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.808 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.808 { 00:11:59.808 "cntlid": 93, 00:11:59.808 "qid": 0, 00:11:59.808 "state": "enabled", 00:11:59.808 "listen_address": { 00:11:59.808 "trtype": "TCP", 00:11:59.808 "adrfam": "IPv4", 00:11:59.808 "traddr": "10.0.0.2", 00:11:59.808 "trsvcid": "4420" 00:11:59.808 }, 00:11:59.808 "peer_address": { 00:11:59.808 "trtype": "TCP", 00:11:59.808 "adrfam": "IPv4", 00:11:59.808 "traddr": "10.0.0.1", 00:11:59.808 "trsvcid": "37676" 00:11:59.808 }, 00:11:59.808 "auth": { 00:11:59.808 "state": "completed", 00:11:59.808 "digest": "sha384", 00:11:59.808 "dhgroup": "ffdhe8192" 00:11:59.808 } 00:11:59.808 } 00:11:59.808 ]' 00:11:59.808 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.808 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.808 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:00.067 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:00.067 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:00.067 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.067 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.067 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.325 18:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:01.257 
18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:01.257 18:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:02.191 00:12:02.191 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:02.191 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.191 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.449 { 00:12:02.449 "cntlid": 95, 00:12:02.449 "qid": 0, 00:12:02.449 "state": "enabled", 00:12:02.449 "listen_address": { 00:12:02.449 "trtype": "TCP", 00:12:02.449 "adrfam": "IPv4", 00:12:02.449 "traddr": "10.0.0.2", 00:12:02.449 "trsvcid": "4420" 00:12:02.449 }, 00:12:02.449 "peer_address": { 00:12:02.449 "trtype": "TCP", 00:12:02.449 "adrfam": "IPv4", 00:12:02.449 "traddr": "10.0.0.1", 00:12:02.449 "trsvcid": "37702" 00:12:02.449 }, 00:12:02.449 "auth": { 00:12:02.449 "state": "completed", 00:12:02.449 "digest": "sha384", 00:12:02.449 "dhgroup": "ffdhe8192" 00:12:02.449 } 00:12:02.449 } 00:12:02.449 ]' 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.449 18:33:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.449 18:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.708 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:03.640 18:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.897 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.179 00:12:04.179 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:04.179 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:04.179 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.437 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.437 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.437 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.437 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.437 18:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.437 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.437 { 00:12:04.437 "cntlid": 97, 00:12:04.437 "qid": 0, 00:12:04.437 "state": "enabled", 00:12:04.437 "listen_address": { 00:12:04.437 "trtype": "TCP", 00:12:04.437 "adrfam": "IPv4", 00:12:04.437 "traddr": "10.0.0.2", 00:12:04.437 "trsvcid": "4420" 00:12:04.437 }, 00:12:04.437 "peer_address": { 00:12:04.437 "trtype": "TCP", 00:12:04.437 "adrfam": "IPv4", 00:12:04.437 "traddr": "10.0.0.1", 00:12:04.437 "trsvcid": "39152" 00:12:04.437 }, 00:12:04.437 "auth": { 00:12:04.437 "state": "completed", 00:12:04.437 "digest": "sha512", 00:12:04.437 "dhgroup": "null" 00:12:04.437 } 00:12:04.437 } 00:12:04.437 ]' 00:12:04.437 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.699 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.699 18:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.699 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:04.699 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.699 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.699 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.699 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.957 18:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret 
DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.916 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.173 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.173 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.173 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.430 00:12:06.430 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:06.430 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.430 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.688 18:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.688 18:33:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.688 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.688 18:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.688 { 00:12:06.688 "cntlid": 99, 00:12:06.688 "qid": 0, 00:12:06.688 "state": "enabled", 00:12:06.688 "listen_address": { 00:12:06.688 "trtype": "TCP", 00:12:06.688 "adrfam": "IPv4", 00:12:06.688 "traddr": "10.0.0.2", 00:12:06.688 "trsvcid": "4420" 00:12:06.688 }, 00:12:06.688 "peer_address": { 00:12:06.688 "trtype": "TCP", 00:12:06.688 "adrfam": "IPv4", 00:12:06.688 "traddr": "10.0.0.1", 00:12:06.688 "trsvcid": "39184" 00:12:06.688 }, 00:12:06.688 "auth": { 00:12:06.688 "state": "completed", 00:12:06.688 "digest": "sha512", 00:12:06.688 "dhgroup": "null" 00:12:06.688 } 00:12:06.688 } 00:12:06.688 ]' 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.688 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.945 18:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.877 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.135 18:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.135 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.135 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.393 00:12:08.393 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.393 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.393 18:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.652 { 00:12:08.652 "cntlid": 101, 00:12:08.652 "qid": 0, 00:12:08.652 "state": "enabled", 00:12:08.652 "listen_address": { 00:12:08.652 "trtype": "TCP", 00:12:08.652 "adrfam": "IPv4", 00:12:08.652 "traddr": "10.0.0.2", 00:12:08.652 "trsvcid": "4420" 00:12:08.652 }, 00:12:08.652 "peer_address": { 00:12:08.652 "trtype": "TCP", 00:12:08.652 "adrfam": "IPv4", 00:12:08.652 "traddr": "10.0.0.1", 00:12:08.652 "trsvcid": "39218" 00:12:08.652 }, 00:12:08.652 "auth": { 00:12:08.652 "state": "completed", 00:12:08.652 "digest": "sha512", 00:12:08.652 "dhgroup": "null" 00:12:08.652 } 00:12:08.652 } 00:12:08.652 ]' 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
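Each of these connect_authenticate iterations follows the same pattern, which the xtrace output makes hard to see at a glance. The sketch below condenses one pass, reconstructed only from the commands echoed in this trace; the subsystem NQN, host UUID, socket path and keyN/ckeyN names are the ones used by this run (rpc_cmd is the target-side RPC helper, hostrpc wraps rpc.py against the bdev_nvme application listening on /var/tmp/host.sock).

  # Host side: allow only the digest/dhgroup combination under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups null
  # Target side: register the host NQN with the DH-HMAC-CHAP key pair under test
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Host side: attach a controller, authenticating with the same keys
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Target side: confirm the queue pair finished DH-HMAC-CHAP authentication
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.state'    # expected: completed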
00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:08.652 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.911 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.911 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.911 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.176 18:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:12:09.762 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.762 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:09.762 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.762 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.762 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.762 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.763 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:09.763 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:10.021 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:10.021 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.021 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:10.021 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:10.021 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:10.021 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.021 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:12:10.021 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.021 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.279 18:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.279 18:33:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.279 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:10.538 00:12:10.538 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.538 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.538 18:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.796 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.796 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.796 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.796 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.796 18:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.796 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.796 { 00:12:10.796 "cntlid": 103, 00:12:10.796 "qid": 0, 00:12:10.796 "state": "enabled", 00:12:10.796 "listen_address": { 00:12:10.796 "trtype": "TCP", 00:12:10.796 "adrfam": "IPv4", 00:12:10.796 "traddr": "10.0.0.2", 00:12:10.796 "trsvcid": "4420" 00:12:10.796 }, 00:12:10.796 "peer_address": { 00:12:10.796 "trtype": "TCP", 00:12:10.796 "adrfam": "IPv4", 00:12:10.796 "traddr": "10.0.0.1", 00:12:10.796 "trsvcid": "39252" 00:12:10.796 }, 00:12:10.796 "auth": { 00:12:10.796 "state": "completed", 00:12:10.796 "digest": "sha512", 00:12:10.796 "dhgroup": "null" 00:12:10.796 } 00:12:10.796 } 00:12:10.797 ]' 00:12:10.797 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.797 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:10.797 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.797 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:10.797 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.797 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.797 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.797 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.055 18:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:11.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.992 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.560 00:12:12.560 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.561 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.561 18:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.820 { 00:12:12.820 "cntlid": 105, 00:12:12.820 "qid": 0, 00:12:12.820 "state": "enabled", 00:12:12.820 "listen_address": { 00:12:12.820 "trtype": "TCP", 00:12:12.820 "adrfam": "IPv4", 00:12:12.820 "traddr": "10.0.0.2", 00:12:12.820 "trsvcid": "4420" 00:12:12.820 }, 00:12:12.820 "peer_address": { 00:12:12.820 "trtype": "TCP", 00:12:12.820 "adrfam": "IPv4", 00:12:12.820 "traddr": "10.0.0.1", 00:12:12.820 "trsvcid": "39282" 00:12:12.820 }, 00:12:12.820 "auth": { 00:12:12.820 "state": "completed", 00:12:12.820 "digest": "sha512", 00:12:12.820 "dhgroup": "ffdhe2048" 00:12:12.820 } 00:12:12.820 } 00:12:12.820 ]' 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.820 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.388 18:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:12:13.957 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.957 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:13.957 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.957 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.957 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.957 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.957 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:13.957 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.216 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.475 00:12:14.475 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.475 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.475 18:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.734 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.734 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.734 18:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.734 18:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.734 18:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.734 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.734 { 00:12:14.734 "cntlid": 107, 00:12:14.734 "qid": 0, 00:12:14.734 "state": "enabled", 00:12:14.734 "listen_address": { 00:12:14.734 "trtype": "TCP", 00:12:14.734 "adrfam": "IPv4", 00:12:14.734 "traddr": "10.0.0.2", 00:12:14.734 "trsvcid": "4420" 00:12:14.734 }, 00:12:14.734 "peer_address": { 00:12:14.734 "trtype": "TCP", 00:12:14.734 "adrfam": "IPv4", 00:12:14.734 "traddr": "10.0.0.1", 00:12:14.734 "trsvcid": "51518" 00:12:14.734 }, 00:12:14.734 "auth": { 00:12:14.734 "state": "completed", 00:12:14.734 "digest": "sha512", 00:12:14.734 "dhgroup": "ffdhe2048" 00:12:14.734 } 00:12:14.734 } 00:12:14.734 ]' 00:12:14.734 18:33:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.993 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:14.993 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.993 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:14.993 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.993 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.993 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.993 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.252 18:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:12:15.820 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:15.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:15.821 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:15.821 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.821 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.821 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.821 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:15.821 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:15.821 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.080 18:33:29 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.080 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.339 00:12:16.339 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:16.339 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.339 18:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.906 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.906 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.906 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.906 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.906 18:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.906 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.906 { 00:12:16.906 "cntlid": 109, 00:12:16.906 "qid": 0, 00:12:16.906 "state": "enabled", 00:12:16.906 "listen_address": { 00:12:16.906 "trtype": "TCP", 00:12:16.906 "adrfam": "IPv4", 00:12:16.906 "traddr": "10.0.0.2", 00:12:16.906 "trsvcid": "4420" 00:12:16.906 }, 00:12:16.906 "peer_address": { 00:12:16.906 "trtype": "TCP", 00:12:16.906 "adrfam": "IPv4", 00:12:16.906 "traddr": "10.0.0.1", 00:12:16.906 "trsvcid": "51536" 00:12:16.906 }, 00:12:16.906 "auth": { 00:12:16.906 "state": "completed", 00:12:16.906 "digest": "sha512", 00:12:16.906 "dhgroup": "ffdhe2048" 00:12:16.906 } 00:12:16.906 } 00:12:16.906 ]' 00:12:16.906 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:16.907 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:16.907 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:16.907 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:16.907 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:16.907 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:16.907 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:16.907 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.166 18:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret 
DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:12:17.733 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.733 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:17.733 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.733 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.733 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.733 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:17.733 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:17.733 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.300 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:18.558 00:12:18.559 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:18.559 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:18.559 18:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.818 18:33:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:18.818 { 00:12:18.818 "cntlid": 111, 00:12:18.818 "qid": 0, 00:12:18.818 "state": "enabled", 00:12:18.818 "listen_address": { 00:12:18.818 "trtype": "TCP", 00:12:18.818 "adrfam": "IPv4", 00:12:18.818 "traddr": "10.0.0.2", 00:12:18.818 "trsvcid": "4420" 00:12:18.818 }, 00:12:18.818 "peer_address": { 00:12:18.818 "trtype": "TCP", 00:12:18.818 "adrfam": "IPv4", 00:12:18.818 "traddr": "10.0.0.1", 00:12:18.818 "trsvcid": "51550" 00:12:18.818 }, 00:12:18.818 "auth": { 00:12:18.818 "state": "completed", 00:12:18.818 "digest": "sha512", 00:12:18.818 "dhgroup": "ffdhe2048" 00:12:18.818 } 00:12:18.818 } 00:12:18.818 ]' 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.818 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.076 18:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:12:20.012 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.012 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:20.012 18:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.012 18:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.012 18:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.012 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.012 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.012 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:20.012 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.324 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.584 00:12:20.584 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:20.584 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:20.584 18:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:20.848 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.848 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:20.848 18:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.848 18:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.848 18:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.848 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:20.848 { 00:12:20.848 "cntlid": 113, 00:12:20.848 "qid": 0, 00:12:20.848 "state": "enabled", 00:12:20.848 "listen_address": { 00:12:20.848 "trtype": "TCP", 00:12:20.848 "adrfam": "IPv4", 00:12:20.848 "traddr": "10.0.0.2", 00:12:20.848 "trsvcid": "4420" 00:12:20.849 }, 00:12:20.849 "peer_address": { 00:12:20.849 "trtype": "TCP", 00:12:20.849 "adrfam": "IPv4", 00:12:20.849 "traddr": "10.0.0.1", 00:12:20.849 "trsvcid": "51580" 00:12:20.849 }, 00:12:20.849 "auth": { 00:12:20.849 "state": "completed", 00:12:20.849 "digest": "sha512", 00:12:20.849 "dhgroup": "ffdhe3072" 00:12:20.849 } 00:12:20.849 } 00:12:20.849 
]' 00:12:20.849 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:20.849 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:20.849 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:20.849 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:20.849 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:20.849 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.849 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:20.849 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.432 18:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:12:22.098 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.098 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:22.098 18:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.098 18:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.098 18:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.098 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.098 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:22.098 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.381 18:33:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.381 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.707 00:12:22.707 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:22.707 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:22.707 18:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.033 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.033 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.034 { 00:12:23.034 "cntlid": 115, 00:12:23.034 "qid": 0, 00:12:23.034 "state": "enabled", 00:12:23.034 "listen_address": { 00:12:23.034 "trtype": "TCP", 00:12:23.034 "adrfam": "IPv4", 00:12:23.034 "traddr": "10.0.0.2", 00:12:23.034 "trsvcid": "4420" 00:12:23.034 }, 00:12:23.034 "peer_address": { 00:12:23.034 "trtype": "TCP", 00:12:23.034 "adrfam": "IPv4", 00:12:23.034 "traddr": "10.0.0.1", 00:12:23.034 "trsvcid": "46108" 00:12:23.034 }, 00:12:23.034 "auth": { 00:12:23.034 "state": "completed", 00:12:23.034 "digest": "sha512", 00:12:23.034 "dhgroup": "ffdhe3072" 00:12:23.034 } 00:12:23.034 } 00:12:23.034 ]' 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.034 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.314 18:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:12:23.881 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.881 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:23.881 18:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.881 18:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.140 18:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.140 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.140 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:24.140 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.399 18:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.658 00:12:24.658 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.658 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
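The verification block that follows each attach (and continues just below) boils down to three assertions against the target's queue-pair listing. Condensed into plain shell, using the same subsystem NQN and the ffdhe3072 group being tested at this point in the trace:

  # Fetch the qpair list for the subsystem and check the negotiated auth parameters
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The controller name is also checked first (jq -r '.[].name' against bdev_nvme_get_controllers must report nvme0) before the controller is detached again for the next key.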
00:12:24.658 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.918 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.918 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.918 18:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.918 18:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.918 18:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.918 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.918 { 00:12:24.918 "cntlid": 117, 00:12:24.918 "qid": 0, 00:12:24.918 "state": "enabled", 00:12:24.918 "listen_address": { 00:12:24.918 "trtype": "TCP", 00:12:24.918 "adrfam": "IPv4", 00:12:24.918 "traddr": "10.0.0.2", 00:12:24.918 "trsvcid": "4420" 00:12:24.918 }, 00:12:24.918 "peer_address": { 00:12:24.918 "trtype": "TCP", 00:12:24.918 "adrfam": "IPv4", 00:12:24.918 "traddr": "10.0.0.1", 00:12:24.918 "trsvcid": "46140" 00:12:24.918 }, 00:12:24.918 "auth": { 00:12:24.918 "state": "completed", 00:12:24.918 "digest": "sha512", 00:12:24.918 "dhgroup": "ffdhe3072" 00:12:24.918 } 00:12:24.918 } 00:12:24.918 ]' 00:12:24.918 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.178 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.178 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.178 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.178 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.178 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.178 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.178 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.437 18:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:12:26.004 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.004 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:26.004 18:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.004 18:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.263 18:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.263 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.263 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:26.263 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.521 18:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.831 00:12:26.831 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.831 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.831 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.090 { 00:12:27.090 "cntlid": 119, 00:12:27.090 "qid": 0, 00:12:27.090 "state": "enabled", 00:12:27.090 "listen_address": { 00:12:27.090 "trtype": "TCP", 00:12:27.090 "adrfam": "IPv4", 00:12:27.090 "traddr": "10.0.0.2", 00:12:27.090 "trsvcid": "4420" 00:12:27.090 }, 00:12:27.090 "peer_address": { 00:12:27.090 "trtype": "TCP", 00:12:27.090 "adrfam": "IPv4", 00:12:27.090 "traddr": "10.0.0.1", 00:12:27.090 "trsvcid": "46170" 00:12:27.090 }, 00:12:27.090 "auth": { 00:12:27.090 "state": "completed", 00:12:27.090 "digest": "sha512", 
00:12:27.090 "dhgroup": "ffdhe3072" 00:12:27.090 } 00:12:27.090 } 00:12:27.090 ]' 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:27.090 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.350 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.350 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.350 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.609 18:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:12:28.175 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.175 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:28.175 18:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.175 18:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.175 18:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.175 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:28.175 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.175 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:28.175 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.433 18:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.691 00:12:28.950 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.950 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.950 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.950 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.950 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.950 18:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.950 18:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.208 18:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.208 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.208 { 00:12:29.208 "cntlid": 121, 00:12:29.208 "qid": 0, 00:12:29.208 "state": "enabled", 00:12:29.208 "listen_address": { 00:12:29.208 "trtype": "TCP", 00:12:29.208 "adrfam": "IPv4", 00:12:29.208 "traddr": "10.0.0.2", 00:12:29.208 "trsvcid": "4420" 00:12:29.208 }, 00:12:29.208 "peer_address": { 00:12:29.208 "trtype": "TCP", 00:12:29.208 "adrfam": "IPv4", 00:12:29.208 "traddr": "10.0.0.1", 00:12:29.208 "trsvcid": "46194" 00:12:29.208 }, 00:12:29.208 "auth": { 00:12:29.208 "state": "completed", 00:12:29.208 "digest": "sha512", 00:12:29.208 "dhgroup": "ffdhe4096" 00:12:29.208 } 00:12:29.208 } 00:12:29.208 ]' 00:12:29.208 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.208 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.209 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.209 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:29.209 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.209 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.209 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.209 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.467 18:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:12:30.400 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.400 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.401 18:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.967 00:12:30.967 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.967 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
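Condensed from the trace above, one iteration of the ffdhe4096 loop amounts to the shell sketch below. The rpc.py path, socket, NQNs, flags, and key names are taken verbatim from the log; the variable names, and the assumption that rpc_cmd maps to plain rpc.py on the target's default socket, are editorial.

# Sketch reconstructed from the trace above, not the literal target/auth.sh.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add

# Pin the initiator-side bdev_nvme layer to a single digest/dhgroup pair.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Allow the host on the target subsystem with key1/ckey1 (rpc_cmd in the trace,
# assumed here to hit the target's default RPC socket), then attach a controller
# from the host side with the same key pair.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1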
00:12:30.967 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.225 { 00:12:31.225 "cntlid": 123, 00:12:31.225 "qid": 0, 00:12:31.225 "state": "enabled", 00:12:31.225 "listen_address": { 00:12:31.225 "trtype": "TCP", 00:12:31.225 "adrfam": "IPv4", 00:12:31.225 "traddr": "10.0.0.2", 00:12:31.225 "trsvcid": "4420" 00:12:31.225 }, 00:12:31.225 "peer_address": { 00:12:31.225 "trtype": "TCP", 00:12:31.225 "adrfam": "IPv4", 00:12:31.225 "traddr": "10.0.0.1", 00:12:31.225 "trsvcid": "46236" 00:12:31.225 }, 00:12:31.225 "auth": { 00:12:31.225 "state": "completed", 00:12:31.225 "digest": "sha512", 00:12:31.225 "dhgroup": "ffdhe4096" 00:12:31.225 } 00:12:31.225 } 00:12:31.225 ]' 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.225 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.483 18:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:12:32.087 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.087 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:32.087 18:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.087 18:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.087 18:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.087 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
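The checks at the end of the trace above are how connect_authenticate confirms the session really negotiated the expected parameters. A minimal standalone version of those checks, using the same jq filters and expected values as the log (variable names are assumptions), would be:

# Sketch of the verification step traced above (target/auth.sh@44-48 markers).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0

# The attached controller must show up on the host-side RPC as nvme0.
[[ $("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The subsystem's queue pair must report a completed DH-CHAP handshake with
# the digest and DH group configured for this iteration.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]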
00:12:32.087 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:32.087 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:32.654 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:32.654 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.654 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:32.654 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:32.654 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:32.654 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.654 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.654 18:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.654 18:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.655 18:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.655 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.655 18:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.913 00:12:32.913 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.913 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.913 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.172 { 00:12:33.172 "cntlid": 125, 00:12:33.172 "qid": 0, 00:12:33.172 "state": "enabled", 00:12:33.172 "listen_address": { 00:12:33.172 "trtype": "TCP", 00:12:33.172 "adrfam": "IPv4", 00:12:33.172 "traddr": "10.0.0.2", 00:12:33.172 "trsvcid": "4420" 00:12:33.172 }, 00:12:33.172 "peer_address": { 00:12:33.172 "trtype": "TCP", 00:12:33.172 "adrfam": "IPv4", 00:12:33.172 "traddr": 
"10.0.0.1", 00:12:33.172 "trsvcid": "50566" 00:12:33.172 }, 00:12:33.172 "auth": { 00:12:33.172 "state": "completed", 00:12:33.172 "digest": "sha512", 00:12:33.172 "dhgroup": "ffdhe4096" 00:12:33.172 } 00:12:33.172 } 00:12:33.172 ]' 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:33.172 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.432 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.432 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.432 18:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.691 18:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:12:34.258 18:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.258 18:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:34.258 18:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.258 18:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.258 18:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.258 18:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.258 18:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:34.258 18:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.517 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:35.133 00:12:35.133 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.133 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.133 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.392 { 00:12:35.392 "cntlid": 127, 00:12:35.392 "qid": 0, 00:12:35.392 "state": "enabled", 00:12:35.392 "listen_address": { 00:12:35.392 "trtype": "TCP", 00:12:35.392 "adrfam": "IPv4", 00:12:35.392 "traddr": "10.0.0.2", 00:12:35.392 "trsvcid": "4420" 00:12:35.392 }, 00:12:35.392 "peer_address": { 00:12:35.392 "trtype": "TCP", 00:12:35.392 "adrfam": "IPv4", 00:12:35.392 "traddr": "10.0.0.1", 00:12:35.392 "trsvcid": "50590" 00:12:35.392 }, 00:12:35.392 "auth": { 00:12:35.392 "state": "completed", 00:12:35.392 "digest": "sha512", 00:12:35.392 "dhgroup": "ffdhe4096" 00:12:35.392 } 00:12:35.392 } 00:12:35.392 ]' 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:35.392 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.651 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.651 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.651 18:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.651 18:33:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:12:36.586 18:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.586 18:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:36.586 18:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.586 18:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.586 18:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.586 18:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.586 18:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.586 18:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:36.586 18:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:36.586 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:36.586 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.586 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:36.586 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:36.586 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:36.586 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.586 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.586 18:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.586 18:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.845 18:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.845 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.845 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.104 00:12:37.104 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
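Each iteration also exercises the kernel initiator, as in the nvme connect/disconnect pair at the start of the line above: the SPDK-side controller is detached, nvme-cli authenticates against the same subsystem with the generated DHHC-1 secret, and the host entry is removed again. A rough equivalent, with the long secret replaced by a placeholder, is:

# Sketch of the kernel-initiator leg traced above (target/auth.sh@49-56 markers).
# <host secret> stands in for the DHHC-1 string generated by the test; the real
# value appears verbatim in the log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
UUID=8b07fcc8-e6b3-4152-8362-9695ab742add
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$UUID

"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$UUID" \
    --dhchap-secret 'DHHC-1:03:<host secret>'   # iterations using a ctrlr key also pass --dhchap-ctrl-secret
nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"   # rpc_cmd in the trace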
00:12:37.104 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.104 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.362 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.362 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.362 18:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.362 18:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.362 18:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.362 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.362 { 00:12:37.362 "cntlid": 129, 00:12:37.362 "qid": 0, 00:12:37.362 "state": "enabled", 00:12:37.362 "listen_address": { 00:12:37.362 "trtype": "TCP", 00:12:37.362 "adrfam": "IPv4", 00:12:37.362 "traddr": "10.0.0.2", 00:12:37.362 "trsvcid": "4420" 00:12:37.362 }, 00:12:37.362 "peer_address": { 00:12:37.362 "trtype": "TCP", 00:12:37.362 "adrfam": "IPv4", 00:12:37.362 "traddr": "10.0.0.1", 00:12:37.362 "trsvcid": "50626" 00:12:37.362 }, 00:12:37.362 "auth": { 00:12:37.362 "state": "completed", 00:12:37.362 "digest": "sha512", 00:12:37.362 "dhgroup": "ffdhe6144" 00:12:37.362 } 00:12:37.362 } 00:12:37.362 ]' 00:12:37.620 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.620 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.620 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.620 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:37.620 18:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.620 18:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.620 18:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.620 18:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.878 18:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:12:38.467 18:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.467 18:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:38.467 18:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.467 18:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.467 18:33:51 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.467 18:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.467 18:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:38.467 18:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.725 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.293 00:12:39.293 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.293 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.293 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.552 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.552 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.552 18:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.552 18:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.552 18:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.552 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.552 { 00:12:39.552 "cntlid": 131, 00:12:39.552 "qid": 0, 00:12:39.552 "state": "enabled", 00:12:39.552 "listen_address": { 00:12:39.552 "trtype": "TCP", 00:12:39.552 "adrfam": "IPv4", 00:12:39.552 "traddr": "10.0.0.2", 00:12:39.552 
"trsvcid": "4420" 00:12:39.552 }, 00:12:39.552 "peer_address": { 00:12:39.552 "trtype": "TCP", 00:12:39.552 "adrfam": "IPv4", 00:12:39.552 "traddr": "10.0.0.1", 00:12:39.552 "trsvcid": "50646" 00:12:39.552 }, 00:12:39.552 "auth": { 00:12:39.552 "state": "completed", 00:12:39.552 "digest": "sha512", 00:12:39.552 "dhgroup": "ffdhe6144" 00:12:39.552 } 00:12:39.552 } 00:12:39.552 ]' 00:12:39.552 18:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.552 18:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.552 18:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.811 18:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:39.811 18:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.811 18:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.811 18:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.811 18:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.071 18:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.004 18:33:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.004 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.571 00:12:41.571 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.571 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.571 18:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.829 { 00:12:41.829 "cntlid": 133, 00:12:41.829 "qid": 0, 00:12:41.829 "state": "enabled", 00:12:41.829 "listen_address": { 00:12:41.829 "trtype": "TCP", 00:12:41.829 "adrfam": "IPv4", 00:12:41.829 "traddr": "10.0.0.2", 00:12:41.829 "trsvcid": "4420" 00:12:41.829 }, 00:12:41.829 "peer_address": { 00:12:41.829 "trtype": "TCP", 00:12:41.829 "adrfam": "IPv4", 00:12:41.829 "traddr": "10.0.0.1", 00:12:41.829 "trsvcid": "50668" 00:12:41.829 }, 00:12:41.829 "auth": { 00:12:41.829 "state": "completed", 00:12:41.829 "digest": "sha512", 00:12:41.829 "dhgroup": "ffdhe6144" 00:12:41.829 } 00:12:41.829 } 00:12:41.829 ]' 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.829 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.829 18:33:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.396 18:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:12:42.962 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.962 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:42.962 18:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.962 18:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.962 18:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.962 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.962 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:42.962 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.220 18:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.785 00:12:43.785 18:33:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.785 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.785 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.785 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.785 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.785 18:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.785 18:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:44.043 { 00:12:44.043 "cntlid": 135, 00:12:44.043 "qid": 0, 00:12:44.043 "state": "enabled", 00:12:44.043 "listen_address": { 00:12:44.043 "trtype": "TCP", 00:12:44.043 "adrfam": "IPv4", 00:12:44.043 "traddr": "10.0.0.2", 00:12:44.043 "trsvcid": "4420" 00:12:44.043 }, 00:12:44.043 "peer_address": { 00:12:44.043 "trtype": "TCP", 00:12:44.043 "adrfam": "IPv4", 00:12:44.043 "traddr": "10.0.0.1", 00:12:44.043 "trsvcid": "55406" 00:12:44.043 }, 00:12:44.043 "auth": { 00:12:44.043 "state": "completed", 00:12:44.043 "digest": "sha512", 00:12:44.043 "dhgroup": "ffdhe6144" 00:12:44.043 } 00:12:44.043 } 00:12:44.043 ]' 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.043 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.300 18:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:12:44.866 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.866 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:44.866 18:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.866 18:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.866 18:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.866 18:33:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.866 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.866 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:44.866 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.125 18:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.693 00:12:45.693 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.693 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.693 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:46.258 { 00:12:46.258 "cntlid": 137, 00:12:46.258 "qid": 0, 00:12:46.258 "state": "enabled", 00:12:46.258 "listen_address": { 00:12:46.258 "trtype": "TCP", 00:12:46.258 "adrfam": "IPv4", 00:12:46.258 
"traddr": "10.0.0.2", 00:12:46.258 "trsvcid": "4420" 00:12:46.258 }, 00:12:46.258 "peer_address": { 00:12:46.258 "trtype": "TCP", 00:12:46.258 "adrfam": "IPv4", 00:12:46.258 "traddr": "10.0.0.1", 00:12:46.258 "trsvcid": "55434" 00:12:46.258 }, 00:12:46.258 "auth": { 00:12:46.258 "state": "completed", 00:12:46.258 "digest": "sha512", 00:12:46.258 "dhgroup": "ffdhe8192" 00:12:46.258 } 00:12:46.258 } 00:12:46.258 ]' 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.258 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.516 18:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:12:47.452 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.452 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:47.452 18:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.452 18:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.452 18:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.452 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:47.452 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:47.452 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:47.711 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:12:47.711 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.711 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:47.711 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:47.711 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:47.711 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.711 18:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.711 18:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.711 18:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.711 18:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.711 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.711 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:48.277 00:12:48.277 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:48.277 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.277 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:48.535 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.535 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.535 18:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:48.535 18:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.535 18:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:48.535 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:48.535 { 00:12:48.535 "cntlid": 139, 00:12:48.535 "qid": 0, 00:12:48.535 "state": "enabled", 00:12:48.535 "listen_address": { 00:12:48.535 "trtype": "TCP", 00:12:48.535 "adrfam": "IPv4", 00:12:48.535 "traddr": "10.0.0.2", 00:12:48.535 "trsvcid": "4420" 00:12:48.535 }, 00:12:48.535 "peer_address": { 00:12:48.535 "trtype": "TCP", 00:12:48.535 "adrfam": "IPv4", 00:12:48.535 "traddr": "10.0.0.1", 00:12:48.535 "trsvcid": "55464" 00:12:48.535 }, 00:12:48.535 "auth": { 00:12:48.535 "state": "completed", 00:12:48.535 "digest": "sha512", 00:12:48.535 "dhgroup": "ffdhe8192" 00:12:48.535 } 00:12:48.535 } 00:12:48.535 ]' 00:12:48.535 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:48.535 18:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:48.535 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:48.793 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:48.793 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:48.793 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.793 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:12:48.793 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.051 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:01:NTZmYmMzMDhmOTMyZTM5YWZiZWMxZGEyY2ViMGU0M2FQImJb: --dhchap-ctrl-secret DHHC-1:02:MWIzYjcxNDY0NjlhYjcyNDU3ZTRhODQzM2U2NmM3MzBhNzVhYjM2NTdhNTM1ZDc4fi46OA==: 00:12:49.617 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.617 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:49.617 18:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.617 18:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.617 18:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.617 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.617 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:49.617 18:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.876 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.443 00:12:50.443 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:50.443 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.443 18:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:51.010 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:51.011 { 00:12:51.011 "cntlid": 141, 00:12:51.011 "qid": 0, 00:12:51.011 "state": "enabled", 00:12:51.011 "listen_address": { 00:12:51.011 "trtype": "TCP", 00:12:51.011 "adrfam": "IPv4", 00:12:51.011 "traddr": "10.0.0.2", 00:12:51.011 "trsvcid": "4420" 00:12:51.011 }, 00:12:51.011 "peer_address": { 00:12:51.011 "trtype": "TCP", 00:12:51.011 "adrfam": "IPv4", 00:12:51.011 "traddr": "10.0.0.1", 00:12:51.011 "trsvcid": "55490" 00:12:51.011 }, 00:12:51.011 "auth": { 00:12:51.011 "state": "completed", 00:12:51.011 "digest": "sha512", 00:12:51.011 "dhgroup": "ffdhe8192" 00:12:51.011 } 00:12:51.011 } 00:12:51.011 ]' 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.011 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.269 18:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:02:MGEyMTBjYmY0OTVjMmZiMzUwMDMyMmI4NjVhMWM3YTllZDA2ODdiZGNiM2VmOGY43lHr8w==: --dhchap-ctrl-secret DHHC-1:01:NTU3MzlkZDZjYjRkNjJhMDJlYTkzYjVmYTA1YTgyY2PQeX73: 00:12:52.204 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.204 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:52.204 18:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.204 
18:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.204 18:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.204 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:52.204 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:52.204 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:52.462 18:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:53.029 00:12:53.029 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.029 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.029 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.354 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.354 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.354 18:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.354 18:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.354 18:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.354 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:53.354 { 00:12:53.354 "cntlid": 143, 00:12:53.354 "qid": 0, 00:12:53.354 "state": "enabled", 00:12:53.354 "listen_address": { 00:12:53.354 "trtype": "TCP", 00:12:53.354 "adrfam": 
"IPv4", 00:12:53.354 "traddr": "10.0.0.2", 00:12:53.354 "trsvcid": "4420" 00:12:53.354 }, 00:12:53.354 "peer_address": { 00:12:53.354 "trtype": "TCP", 00:12:53.354 "adrfam": "IPv4", 00:12:53.354 "traddr": "10.0.0.1", 00:12:53.354 "trsvcid": "45264" 00:12:53.354 }, 00:12:53.354 "auth": { 00:12:53.354 "state": "completed", 00:12:53.354 "digest": "sha512", 00:12:53.354 "dhgroup": "ffdhe8192" 00:12:53.354 } 00:12:53.354 } 00:12:53.354 ]' 00:12:53.354 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:53.639 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.639 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:53.639 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.639 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:53.639 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.639 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.639 18:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.897 18:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:54.463 18:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.721 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:55.654 00:12:55.654 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:55.654 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:55.654 18:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.654 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.654 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.654 18:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.654 18:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.654 18:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.654 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:55.654 { 00:12:55.654 "cntlid": 145, 00:12:55.654 "qid": 0, 00:12:55.654 "state": "enabled", 00:12:55.654 "listen_address": { 00:12:55.654 "trtype": "TCP", 00:12:55.654 "adrfam": "IPv4", 00:12:55.654 "traddr": "10.0.0.2", 00:12:55.654 "trsvcid": "4420" 00:12:55.654 }, 00:12:55.654 "peer_address": { 00:12:55.654 "trtype": "TCP", 00:12:55.654 "adrfam": "IPv4", 00:12:55.654 "traddr": "10.0.0.1", 00:12:55.654 "trsvcid": "45278" 00:12:55.654 }, 00:12:55.654 "auth": { 00:12:55.654 "state": "completed", 00:12:55.654 "digest": "sha512", 00:12:55.654 "dhgroup": "ffdhe8192" 00:12:55.654 } 00:12:55.654 } 00:12:55.654 ]' 00:12:55.654 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:55.912 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.912 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:55.912 18:34:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:55.912 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:55.912 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.912 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.912 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.170 18:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:00:YWUyOGIyYTAyOGIwM2I5ZTI1MzQ5ZTJiZDlkYWY3ODY2NTg0ZDA0YzVmYjhhZGM1LcrlEg==: --dhchap-ctrl-secret DHHC-1:03:MmU1OWM5NDFmYzk2NjFkZDc2ZDAzNTZkMDIwNDFlNzBkMzRjNDEyN2UxN2IyNzBlNzBkZmM0OGM0MWNhYmJiMLFwIfw=: 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:57.104 18:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:12:57.362 request: 00:12:57.362 { 00:12:57.362 "name": "nvme0", 00:12:57.363 "trtype": "tcp", 00:12:57.363 "traddr": "10.0.0.2", 00:12:57.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add", 00:12:57.363 "adrfam": "ipv4", 00:12:57.363 "trsvcid": "4420", 00:12:57.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:57.363 "dhchap_key": "key2", 00:12:57.363 "method": "bdev_nvme_attach_controller", 00:12:57.363 "req_id": 1 00:12:57.363 } 00:12:57.363 Got JSON-RPC error response 00:12:57.363 response: 00:12:57.363 { 00:12:57.363 "code": -32602, 00:12:57.363 "message": "Invalid parameters" 00:12:57.363 } 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type 
-t hostrpc 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:57.621 18:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:58.187 request: 00:12:58.187 { 00:12:58.187 "name": "nvme0", 00:12:58.187 "trtype": "tcp", 00:12:58.187 "traddr": "10.0.0.2", 00:12:58.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add", 00:12:58.187 "adrfam": "ipv4", 00:12:58.187 "trsvcid": "4420", 00:12:58.187 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:58.187 "dhchap_key": "key1", 00:12:58.187 "dhchap_ctrlr_key": "ckey2", 00:12:58.187 "method": "bdev_nvme_attach_controller", 00:12:58.187 "req_id": 1 00:12:58.187 } 00:12:58.187 Got JSON-RPC error response 00:12:58.187 response: 00:12:58.187 { 00:12:58.187 "code": -32602, 00:12:58.187 "message": "Invalid parameters" 00:12:58.187 } 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key1 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.187 18:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.753 request: 00:12:58.753 { 00:12:58.753 "name": "nvme0", 00:12:58.753 "trtype": "tcp", 00:12:58.753 "traddr": "10.0.0.2", 00:12:58.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add", 00:12:58.753 "adrfam": "ipv4", 00:12:58.753 "trsvcid": "4420", 00:12:58.753 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:58.753 "dhchap_key": "key1", 00:12:58.753 "dhchap_ctrlr_key": "ckey1", 00:12:58.753 "method": "bdev_nvme_attach_controller", 00:12:58.753 "req_id": 1 00:12:58.753 } 00:12:58.753 Got JSON-RPC error response 00:12:58.753 response: 00:12:58.753 { 00:12:58.753 "code": -32602, 00:12:58.753 "message": "Invalid parameters" 00:12:58.753 } 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 69296 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 69296 ']' 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 69296 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.753 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69296 00:12:58.754 killing process with pid 69296 00:12:58.754 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.754 18:34:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.754 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 69296' 00:12:58.754 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 69296 00:12:58.754 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 69296 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72368 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72368 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 72368 ']' 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:59.010 18:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72368 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 72368 ']' 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
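
The target/auth.sh@117 through @135 portion of the trace above exercises the negative path: the subsystem is configured with one key while the host deliberately offers a different key or controller key, and bdev_nvme_attach_controller is expected to fail with the JSON-RPC -32602 "Invalid parameters" response (the harness wraps this in its NOT helper). A condensed sketch of that expectation follows, reusing the hostrpc/rpc_cmd helpers and hostnqn assumed in the earlier sketch; the explicit if/else is an illustrative stand-in for NOT.

  # Target holds key1; host offers key2, so the attach must be rejected.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
  if hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
      echo "unexpected success: mismatched DH-HMAC-CHAP key was accepted" >&2
      exit 1   # a successful attach here means the auth check failed open
  fi
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
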
00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:00.381 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.638 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:00.638 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:13:00.638 18:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:00.638 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.638 18:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.638 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:01.205 00:13:01.205 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:01.205 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:01.205 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:01.772 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.772 18:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.772 18:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:01.773 18:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:01.773 { 00:13:01.773 "cntlid": 1, 00:13:01.773 "qid": 0, 
00:13:01.773 "state": "enabled", 00:13:01.773 "listen_address": { 00:13:01.773 "trtype": "TCP", 00:13:01.773 "adrfam": "IPv4", 00:13:01.773 "traddr": "10.0.0.2", 00:13:01.773 "trsvcid": "4420" 00:13:01.773 }, 00:13:01.773 "peer_address": { 00:13:01.773 "trtype": "TCP", 00:13:01.773 "adrfam": "IPv4", 00:13:01.773 "traddr": "10.0.0.1", 00:13:01.773 "trsvcid": "45336" 00:13:01.773 }, 00:13:01.773 "auth": { 00:13:01.773 "state": "completed", 00:13:01.773 "digest": "sha512", 00:13:01.773 "dhgroup": "ffdhe8192" 00:13:01.773 } 00:13:01.773 } 00:13:01.773 ]' 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.773 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.030 18:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid 8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-secret DHHC-1:03:NTVkYjIyMGNkNGY1NmU3YmQ2MGIyY2RhYWQ4NDY3ZTEyMzA5NGVmNTI0NzdhNDE4ODhhMjdjNjc3YzdiY2I4NcvWsTk=: 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --dhchap-key key3 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:02.963 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:03.221 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.221 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:03.221 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.221 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:03.221 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.221 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:03.221 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.221 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.221 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.478 request: 00:13:03.478 { 00:13:03.478 "name": "nvme0", 00:13:03.478 "trtype": "tcp", 00:13:03.478 "traddr": "10.0.0.2", 00:13:03.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add", 00:13:03.478 "adrfam": "ipv4", 00:13:03.478 "trsvcid": "4420", 00:13:03.478 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:03.478 "dhchap_key": "key3", 00:13:03.478 "method": "bdev_nvme_attach_controller", 00:13:03.478 "req_id": 1 00:13:03.478 } 00:13:03.478 Got JSON-RPC error response 00:13:03.478 response: 00:13:03.478 { 00:13:03.478 "code": -32602, 00:13:03.478 "message": "Invalid parameters" 00:13:03.478 } 00:13:03.478 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:03.478 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:03.478 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:03.478 18:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:03.478 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:13:03.478 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:03.478 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:03.478 18:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:03.736 18:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.736 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # 
local es=0 00:13:03.736 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.736 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:03.736 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.736 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:03.736 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.736 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.736 18:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:03.994 request: 00:13:03.994 { 00:13:03.994 "name": "nvme0", 00:13:03.994 "trtype": "tcp", 00:13:03.994 "traddr": "10.0.0.2", 00:13:03.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add", 00:13:03.994 "adrfam": "ipv4", 00:13:03.994 "trsvcid": "4420", 00:13:03.994 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:03.994 "dhchap_key": "key3", 00:13:03.994 "method": "bdev_nvme_attach_controller", 00:13:03.994 "req_id": 1 00:13:03.994 } 00:13:03.994 Got JSON-RPC error response 00:13:03.994 response: 00:13:03.994 { 00:13:03.994 "code": -32602, 00:13:03.994 "message": "Invalid parameters" 00:13:03.994 } 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@173 -- # trap - SIGINT SIGTERM EXIT 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@174 -- # cleanup 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69328 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 69328 ']' 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 69328 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 69328 00:13:03.994 killing process with pid 69328 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 
'killing process with pid 69328' 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 69328 00:13:03.994 18:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 69328 00:13:04.560 18:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:04.560 18:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:04.560 18:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:04.560 18:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.560 18:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:04.560 18:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.560 18:34:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.560 rmmod nvme_tcp 00:13:04.560 rmmod nvme_fabrics 00:13:04.560 rmmod nvme_keyring 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72368 ']' 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72368 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 72368 ']' 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 72368 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:04.560 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72368 00:13:04.818 killing process with pid 72368 00:13:04.818 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:04.818 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:04.818 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72368' 00:13:04.818 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 72368 00:13:04.818 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 72368 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rDU 
/tmp/spdk.key-sha256.UhS /tmp/spdk.key-sha384.baA /tmp/spdk.key-sha512.SSN /tmp/spdk.key-sha512.vxu /tmp/spdk.key-sha384.aDJ /tmp/spdk.key-sha256.Z27 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:05.100 ************************************ 00:13:05.100 END TEST nvmf_auth_target 00:13:05.100 00:13:05.100 real 2m52.553s 00:13:05.100 user 6m51.610s 00:13:05.100 sys 0m27.494s 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:05.100 18:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.100 ************************************ 00:13:05.100 18:34:18 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:05.100 18:34:18 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:05.100 18:34:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:05.100 18:34:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:05.100 18:34:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:05.100 ************************************ 00:13:05.100 START TEST nvmf_bdevio_no_huge 00:13:05.100 ************************************ 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:05.100 * Looking for test storage... 00:13:05.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.100 18:34:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:05.100 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:05.100 18:34:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:05.100 Cannot find device "nvmf_tgt_br" 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:05.359 Cannot find device "nvmf_tgt_br2" 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:05.359 Cannot find device "nvmf_tgt_br" 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:05.359 Cannot find device "nvmf_tgt_br2" 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:05.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:05.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:05.359 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:05.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:05.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:13:05.617 00:13:05.617 --- 10.0.0.2 ping statistics --- 00:13:05.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.617 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:05.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:05.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:13:05.617 00:13:05.617 --- 10.0.0.3 ping statistics --- 00:13:05.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.617 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:05.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:05.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:05.617 00:13:05.617 --- 10.0.0.1 ping statistics --- 00:13:05.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:05.617 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72669 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72669 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 72669 ']' 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:05.617 18:34:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:05.617 [2024-05-16 18:34:18.963937] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:13:05.617 [2024-05-16 18:34:18.964053] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:05.617 [2024-05-16 18:34:19.111489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.875 [2024-05-16 18:34:19.313358] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
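The preceding trace is nvmftestinit building the test topology before the target comes up: the initiator keeps nvmf_init_if (10.0.0.1) on the host, the target ends nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the bridge-side veth peers are joined through nvmf_br, TCP port 4420 is opened, and the pings confirm the path before nvmf_tgt is launched inside the namespace with --no-huge -s 1024. A condensed sketch of the same steps (interface, namespace and binary names are the ones this test's common.sh uses; run as root):

    # Condensed restatement of the nvmf_veth_init steps traced above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side, on the host
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> initiator
    # Start the target inside the namespace without hugepages, as this test does:
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

Because both ends are bridged on the same host, the initiator reaches 10.0.0.2/10.0.0.3 without leaving the box, and the iptables rule only has to admit the NVMe/TCP port.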
00:13:05.875 [2024-05-16 18:34:19.313452] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.875 [2024-05-16 18:34:19.313472] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.875 [2024-05-16 18:34:19.313485] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.875 [2024-05-16 18:34:19.313497] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:05.875 [2024-05-16 18:34:19.313617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:05.875 [2024-05-16 18:34:19.314903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:05.875 [2024-05-16 18:34:19.315025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:05.875 [2024-05-16 18:34:19.315042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:05.875 [2024-05-16 18:34:19.321401] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.809 [2024-05-16 18:34:20.043054] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.809 Malloc0 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.809 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:06.809 [2024-05-16 18:34:20.084712] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:06.810 [2024-05-16 18:34:20.085236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:06.810 { 00:13:06.810 "params": { 00:13:06.810 "name": "Nvme$subsystem", 00:13:06.810 "trtype": "$TEST_TRANSPORT", 00:13:06.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:06.810 "adrfam": "ipv4", 00:13:06.810 "trsvcid": "$NVMF_PORT", 00:13:06.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:06.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:06.810 "hdgst": ${hdgst:-false}, 00:13:06.810 "ddgst": ${ddgst:-false} 00:13:06.810 }, 00:13:06.810 "method": "bdev_nvme_attach_controller" 00:13:06.810 } 00:13:06.810 EOF 00:13:06.810 )") 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:06.810 18:34:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:06.810 "params": { 00:13:06.810 "name": "Nvme1", 00:13:06.810 "trtype": "tcp", 00:13:06.810 "traddr": "10.0.0.2", 00:13:06.810 "adrfam": "ipv4", 00:13:06.810 "trsvcid": "4420", 00:13:06.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:06.810 "hdgst": false, 00:13:06.810 "ddgst": false 00:13:06.810 }, 00:13:06.810 "method": "bdev_nvme_attach_controller" 00:13:06.810 }' 00:13:06.810 [2024-05-16 18:34:20.154291] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
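With the target listening on its RPC socket, bdevio.sh provisions everything over JSON-RPC: a TCP transport (with the test's -o -u 8192 options), a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev added as a namespace, and a listener on 10.0.0.2:4420; the JSON printed just above is what gen_nvmf_target_json hands to bdevio on /dev/fd/62 so it attaches as the initiator. The same sequence with plain rpc.py, shown only as a sketch (the test itself goes through its rpc_cmd wrapper into the namespaced target):

    RPC="./scripts/rpc.py"                                     # talks to /var/tmp/spdk.sock by default
    $RPC nvmf_create_transport -t tcp -o -u 8192               # transport options exactly as the test passes them
    $RPC bdev_malloc_create 64 512 -b Malloc0                  # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio then reads a bdev_nvme_attach_controller JSON config like the one printed above:
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024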
00:13:06.810 [2024-05-16 18:34:20.154457] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72705 ] 00:13:07.067 [2024-05-16 18:34:20.326070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:07.067 [2024-05-16 18:34:20.507480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:07.067 [2024-05-16 18:34:20.507568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:07.067 [2024-05-16 18:34:20.507581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.067 [2024-05-16 18:34:20.529730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:07.325 I/O targets: 00:13:07.325 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:07.325 00:13:07.325 00:13:07.325 CUnit - A unit testing framework for C - Version 2.1-3 00:13:07.325 http://cunit.sourceforge.net/ 00:13:07.325 00:13:07.325 00:13:07.325 Suite: bdevio tests on: Nvme1n1 00:13:07.325 Test: blockdev write read block ...passed 00:13:07.325 Test: blockdev write zeroes read block ...passed 00:13:07.325 Test: blockdev write zeroes read no split ...passed 00:13:07.325 Test: blockdev write zeroes read split ...passed 00:13:07.325 Test: blockdev write zeroes read split partial ...passed 00:13:07.325 Test: blockdev reset ...[2024-05-16 18:34:20.761996] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:07.325 [2024-05-16 18:34:20.762189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ae640 (9): Bad file descriptor 00:13:07.325 [2024-05-16 18:34:20.775361] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:07.325 passed 00:13:07.325 Test: blockdev write read 8 blocks ...passed 00:13:07.326 Test: blockdev write read size > 128k ...passed 00:13:07.326 Test: blockdev write read invalid size ...passed 00:13:07.326 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:07.326 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:07.326 Test: blockdev write read max offset ...passed 00:13:07.326 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:07.326 Test: blockdev writev readv 8 blocks ...passed 00:13:07.326 Test: blockdev writev readv 30 x 1block ...passed 00:13:07.326 Test: blockdev writev readv block ...passed 00:13:07.326 Test: blockdev writev readv size > 128k ...passed 00:13:07.326 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:07.326 Test: blockdev comparev and writev ...[2024-05-16 18:34:20.784981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.326 [2024-05-16 18:34:20.785036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.785069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.326 [2024-05-16 18:34:20.785090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.785483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.326 [2024-05-16 18:34:20.785524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.785555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.326 [2024-05-16 18:34:20.785573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.786026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.326 [2024-05-16 18:34:20.786066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.786096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.326 [2024-05-16 18:34:20.786114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.786537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.326 [2024-05-16 18:34:20.786577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.786607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.326 [2024-05-16 18:34:20.786624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:07.326 passed 00:13:07.326 Test: blockdev nvme passthru rw ...passed 00:13:07.326 Test: blockdev nvme passthru vendor specific ...[2024-05-16 18:34:20.787725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.326 [2024-05-16 18:34:20.787768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.787970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.326 [2024-05-16 18:34:20.788007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.788177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.326 [2024-05-16 18:34:20.788217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:07.326 [2024-05-16 18:34:20.788386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.326 [2024-05-16 18:34:20.788422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:07.326 passed 00:13:07.326 Test: blockdev nvme admin passthru ...passed 00:13:07.326 Test: blockdev copy ...passed 00:13:07.326 00:13:07.326 Run Summary: Type Total Ran Passed Failed Inactive 00:13:07.326 suites 1 1 n/a 0 0 00:13:07.326 tests 23 23 23 0 0 00:13:07.326 asserts 152 152 152 0 n/a 00:13:07.326 00:13:07.326 Elapsed time = 0.174 seconds 00:13:07.891 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.891 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.891 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:07.891 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.891 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:07.891 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:07.891 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.891 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:08.149 rmmod nvme_tcp 00:13:08.149 rmmod nvme_fabrics 00:13:08.149 rmmod nvme_keyring 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72669 ']' 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72669 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 72669 ']' 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 72669 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72669 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:13:08.149 killing process with pid 72669 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72669' 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 72669 00:13:08.149 18:34:21 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 72669 00:13:08.149 [2024-05-16 18:34:21.495664] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:08.716 00:13:08.716 real 0m3.620s 00:13:08.716 user 0m12.200s 00:13:08.716 sys 0m1.559s 00:13:08.716 ************************************ 00:13:08.716 END TEST nvmf_bdevio_no_huge 00:13:08.716 ************************************ 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.716 18:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:08.716 18:34:22 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:08.716 18:34:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:08.716 18:34:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.716 18:34:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.716 ************************************ 00:13:08.716 START TEST nvmf_tls 00:13:08.716 ************************************ 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:08.716 * Looking for test storage... 
00:13:08.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.716 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:08.974 Cannot find device "nvmf_tgt_br" 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:08.974 Cannot find device "nvmf_tgt_br2" 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:08.974 Cannot find device "nvmf_tgt_br" 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:08.974 Cannot find device "nvmf_tgt_br2" 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:08.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:08.974 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:08.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:08.975 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:09.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:13:09.233 00:13:09.233 --- 10.0.0.2 ping statistics --- 00:13:09.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.233 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:09.233 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:09.233 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:13:09.233 00:13:09.233 --- 10.0.0.3 ping statistics --- 00:13:09.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.233 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:09.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:13:09.233 00:13:09.233 --- 10.0.0.1 ping statistics --- 00:13:09.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.233 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=72885 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 72885 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 72885 ']' 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:09.233 18:34:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:09.233 [2024-05-16 18:34:22.642667] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:13:09.233 [2024-05-16 18:34:22.642798] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.490 [2024-05-16 18:34:22.790848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.490 [2024-05-16 18:34:22.956666] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.490 [2024-05-16 18:34:22.956761] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:09.490 [2024-05-16 18:34:22.956774] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.490 [2024-05-16 18:34:22.956783] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.490 [2024-05-16 18:34:22.956791] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.490 [2024-05-16 18:34:22.956845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.423 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:10.423 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:10.423 18:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.423 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:10.423 18:34:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:10.423 18:34:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.423 18:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:10.423 18:34:23 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:10.681 true 00:13:10.681 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:10.681 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:10.939 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:10.939 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:10.939 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:11.249 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:11.249 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:11.522 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:11.522 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:11.522 18:34:24 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:11.779 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:11.779 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:12.037 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:12.037 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:12.037 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:12.037 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:12.602 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:12.602 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:12.602 18:34:25 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:12.860 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:13:12.860 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 
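Because the TLS target was started with --wait-for-rpc, tls.sh can adjust the ssl socket implementation before anything initializes: it makes ssl the default impl, exercises sock_impl_set_options with TLS versions 13 and 7, reads each value back with sock_impl_get_options piped to jq, and flips kTLS on and off again; only after pinning the version and calling framework_start_init (a few lines below) does the target finish starting. A minimal sketch of that flow against a target sitting in --wait-for-rpc:

    RPC="./scripts/rpc.py"
    $RPC sock_set_default_impl -i ssl                        # use the TLS-capable ssl sock impl
    $RPC sock_impl_set_options -i ssl --tls-version 13       # TLS 1.3
    $RPC sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
    $RPC sock_impl_set_options -i ssl --enable-ktls          # kernel TLS on...
    $RPC sock_impl_set_options -i ssl --disable-ktls         # ...and back off for this run
    $RPC framework_start_init                                # let the deferred init proceed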
00:13:13.117 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:13.117 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:13.117 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:13.375 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:13.375 18:34:26 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:13.633 18:34:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.aiTeTGQLfS 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.itycBfiSzK 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.aiTeTGQLfS 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.itycBfiSzK 00:13:13.891 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:14.149 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:14.406 [2024-05-16 18:34:27.693950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:14.406 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.aiTeTGQLfS 00:13:14.406 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.aiTeTGQLfS 00:13:14.406 18:34:27 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:14.663 [2024-05-16 18:34:28.012982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.663 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:14.932 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:15.189 [2024-05-16 18:34:28.521029] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:15.189 [2024-05-16 18:34:28.521149] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:15.189 [2024-05-16 18:34:28.521358] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.189 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:15.447 malloc0 00:13:15.447 18:34:28 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:15.704 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aiTeTGQLfS 00:13:15.961 [2024-05-16 18:34:29.444577] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:16.219 18:34:29 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.aiTeTGQLfS 00:13:26.184 Initializing NVMe Controllers 00:13:26.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:26.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:26.184 Initialization complete. Launching workers. 
00:13:26.184 ======================================================== 00:13:26.184 Latency(us) 00:13:26.184 Device Information : IOPS MiB/s Average min max 00:13:26.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8859.83 34.61 7225.47 1127.02 49019.81 00:13:26.184 ======================================================== 00:13:26.184 Total : 8859.83 34.61 7225.47 1127.02 49019.81 00:13:26.184 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aiTeTGQLfS 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aiTeTGQLfS' 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73128 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73128 /var/tmp/bdevperf.sock 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73128 ']' 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:26.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:26.184 18:34:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:26.441 [2024-05-16 18:34:39.716992] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
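The two keys written to /tmp a little earlier (key and key_2, the /tmp/tmp.aiTeTGQLfS and /tmp/tmp.itycBfiSzK files) are in the NVMe TLS PSK interchange form produced by format_interchange_psk: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (01 here, from digest=1), and base64 of the configured key characters with a CRC-32 appended, terminated by ':'. A rough stand-in is sketched below; it mirrors the python step visible in the trace, but the helper name and the CRC byte order are assumptions on my part, so treat it as illustrative rather than the canonical common.sh code:

    # Hypothetical stand-in for format_interchange_psk <key> <digest-id>; the CRC-32
    # byte order ("little") is assumed, not something the trace shows.
    format_interchange_psk_sketch() {
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$1" "$2"
    }
    key=$(format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1)
    key_path=$(mktemp)
    printf '%s' "$key" > "$key_path" && chmod 0600 "$key_path"   # same 0600 handling as the trace

Both sides then refer to the file rather than the raw key: nvmf_subsystem_add_host registers it for the host NQN on the target, and the initiator passes it with --psk / --psk-path.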
00:13:26.441 [2024-05-16 18:34:39.717149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73128 ] 00:13:26.441 [2024-05-16 18:34:39.860115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.698 [2024-05-16 18:34:40.054518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.698 [2024-05-16 18:34:40.130152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:27.261 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:27.261 18:34:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:27.261 18:34:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aiTeTGQLfS 00:13:27.518 [2024-05-16 18:34:40.938051] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:27.518 [2024-05-16 18:34:40.938232] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:27.518 TLSTESTn1 00:13:27.775 18:34:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:27.775 Running I/O for 10 seconds... 00:13:37.737 00:13:37.737 Latency(us) 00:13:37.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.737 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:37.737 Verification LBA range: start 0x0 length 0x2000 00:13:37.737 TLSTESTn1 : 10.02 3564.04 13.92 0.00 0.00 35846.81 6464.23 47662.55 00:13:37.737 =================================================================================================================== 00:13:37.737 Total : 3564.04 13.92 0.00 0.00 35846.81 6464.23 47662.55 00:13:37.737 0 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73128 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73128 ']' 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73128 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73128 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:37.737 killing process with pid 73128 00:13:37.737 Received shutdown signal, test time was about 10.000000 seconds 00:13:37.737 00:13:37.737 Latency(us) 00:13:37.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.737 =================================================================================================================== 00:13:37.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
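On the initiator side the trace exercises the TLS data path twice: spdk_nvme_perf with -S ssl and --psk-path produced the IOPS table above, and bdevperf then attaches a controller over its RPC socket with --psk before bdevperf.py runs the verify job. A sketch of both invocations under the same assumptions as the previous block (placeholder PSK_FILE, binary paths as shown in the trace, the wait for the bdevperf RPC socket omitted):

    # Raw NVMe/TCP performance over TLS, as in target/tls.sh@137 (the trace runs
    # this inside the nvmf_tgt_ns_spdk network namespace).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path "$PSK_FILE"

    # bdev-layer variant (target/tls.sh@143): start bdevperf idle, attach a TLS
    # controller via RPC, then run the verify workload against TLSTESTn1.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$PSK_FILE"
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests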
00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73128' 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73128 00:13:37.737 [2024-05-16 18:34:51.211229] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:37.737 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73128 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.itycBfiSzK 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.itycBfiSzK 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:38.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.itycBfiSzK 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.itycBfiSzK' 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73262 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73262 /var/tmp/bdevperf.sock 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73262 ']' 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:38.359 18:34:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.359 [2024-05-16 18:34:51.571669] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:13:38.359 [2024-05-16 18:34:51.571774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73262 ] 00:13:38.359 [2024-05-16 18:34:51.703466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.631 [2024-05-16 18:34:51.848601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.631 [2024-05-16 18:34:51.918323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:39.197 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:39.197 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:39.197 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.itycBfiSzK 00:13:39.456 [2024-05-16 18:34:52.743439] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:39.456 [2024-05-16 18:34:52.743613] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:39.456 [2024-05-16 18:34:52.749022] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:39.456 [2024-05-16 18:34:52.749320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x782f80 (107): Transport endpoint is not connected 00:13:39.456 [2024-05-16 18:34:52.750305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x782f80 (9): Bad file descriptor 00:13:39.456 [2024-05-16 18:34:52.751302] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:39.456 [2024-05-16 18:34:52.751327] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:39.456 [2024-05-16 18:34:52.751342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
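target/tls.sh@146 wraps run_bdevperf in NOT because this attach is meant to fail: /tmp/tmp.itycBfiSzK was never registered on the target, so the handshake breaks down and the RPC returns the error dumped just below. Written out directly, the assertion is a sketch along these lines (same placeholders as above, key path hypothetical):

    # Expect bdev_nvme_attach_controller to fail when the supplied key does not
    # match anything registered on the target; unexpected success fails the test.
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/unregistered.key; then
        echo "ERROR: attach succeeded with an unregistered PSK" >&2
        exit 1
    fi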
00:13:39.456 request: 00:13:39.456 { 00:13:39.456 "name": "TLSTEST", 00:13:39.456 "trtype": "tcp", 00:13:39.456 "traddr": "10.0.0.2", 00:13:39.456 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:39.456 "adrfam": "ipv4", 00:13:39.456 "trsvcid": "4420", 00:13:39.456 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.456 "psk": "/tmp/tmp.itycBfiSzK", 00:13:39.456 "method": "bdev_nvme_attach_controller", 00:13:39.456 "req_id": 1 00:13:39.456 } 00:13:39.456 Got JSON-RPC error response 00:13:39.456 response: 00:13:39.456 { 00:13:39.456 "code": -32602, 00:13:39.456 "message": "Invalid parameters" 00:13:39.456 } 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73262 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73262 ']' 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73262 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73262 00:13:39.456 killing process with pid 73262 00:13:39.456 Received shutdown signal, test time was about 10.000000 seconds 00:13:39.456 00:13:39.456 Latency(us) 00:13:39.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.456 =================================================================================================================== 00:13:39.456 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73262' 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73262 00:13:39.456 [2024-05-16 18:34:52.802265] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:39.456 18:34:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73262 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aiTeTGQLfS 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aiTeTGQLfS 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.aiTeTGQLfS 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aiTeTGQLfS' 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73285 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73285 /var/tmp/bdevperf.sock 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73285 ']' 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:39.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:39.714 18:34:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.714 [2024-05-16 18:34:53.146285] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:13:39.714 [2024-05-16 18:34:53.146388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73285 ] 00:13:39.972 [2024-05-16 18:34:53.278871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.972 [2024-05-16 18:34:53.423279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.229 [2024-05-16 18:34:53.493172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:40.794 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:40.794 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:40.794 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.aiTeTGQLfS 00:13:41.053 [2024-05-16 18:34:54.418133] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:41.053 [2024-05-16 18:34:54.418287] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:41.053 [2024-05-16 18:34:54.429100] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:41.053 [2024-05-16 18:34:54.429145] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:41.053 [2024-05-16 18:34:54.429207] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:41.053 [2024-05-16 18:34:54.430018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3ef80 (107): Transport endpoint is not connected 00:13:41.053 [2024-05-16 18:34:54.431006] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3ef80 (9): Bad file descriptor 00:13:41.053 [2024-05-16 18:34:54.432002] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:41.053 [2024-05-16 18:34:54.432026] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:41.053 [2024-05-16 18:34:54.432041] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:41.053 request: 00:13:41.053 { 00:13:41.053 "name": "TLSTEST", 00:13:41.053 "trtype": "tcp", 00:13:41.053 "traddr": "10.0.0.2", 00:13:41.053 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:41.053 "adrfam": "ipv4", 00:13:41.053 "trsvcid": "4420", 00:13:41.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:41.053 "psk": "/tmp/tmp.aiTeTGQLfS", 00:13:41.053 "method": "bdev_nvme_attach_controller", 00:13:41.053 "req_id": 1 00:13:41.053 } 00:13:41.053 Got JSON-RPC error response 00:13:41.053 response: 00:13:41.053 { 00:13:41.053 "code": -32602, 00:13:41.053 "message": "Invalid parameters" 00:13:41.053 } 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73285 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73285 ']' 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73285 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73285 00:13:41.053 killing process with pid 73285 00:13:41.053 Received shutdown signal, test time was about 10.000000 seconds 00:13:41.053 00:13:41.053 Latency(us) 00:13:41.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.053 =================================================================================================================== 00:13:41.053 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73285' 00:13:41.053 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73285 00:13:41.053 [2024-05-16 18:34:54.484725] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73285 00:13:41.053 scheduled for removal in v24.09 hit 1 times 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aiTeTGQLfS 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aiTeTGQLfS 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type 
-t "$arg")" in 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.aiTeTGQLfS 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aiTeTGQLfS' 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73318 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:41.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73318 /var/tmp/bdevperf.sock 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73318 ']' 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:41.311 18:34:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:41.570 [2024-05-16 18:34:54.825789] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:13:41.570 [2024-05-16 18:34:54.825944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73318 ] 00:13:41.570 [2024-05-16 18:34:54.962572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.828 [2024-05-16 18:34:55.106920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.828 [2024-05-16 18:34:55.175746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:42.396 18:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:42.396 18:34:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:42.396 18:34:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aiTeTGQLfS 00:13:42.654 [2024-05-16 18:34:56.104642] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:42.654 [2024-05-16 18:34:56.104790] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:42.654 [2024-05-16 18:34:56.110292] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:42.654 [2024-05-16 18:34:56.110335] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:42.654 [2024-05-16 18:34:56.110388] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:42.654 [2024-05-16 18:34:56.110987] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57f80 (107): Transport endpoint is not connected 00:13:42.654 [2024-05-16 18:34:56.111973] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f57f80 (9): Bad file descriptor 00:13:42.654 [2024-05-16 18:34:56.112969] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:42.654 [2024-05-16 18:34:56.113019] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:42.654 [2024-05-16 18:34:56.113038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:42.654 request: 00:13:42.654 { 00:13:42.654 "name": "TLSTEST", 00:13:42.654 "trtype": "tcp", 00:13:42.654 "traddr": "10.0.0.2", 00:13:42.654 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:42.654 "adrfam": "ipv4", 00:13:42.654 "trsvcid": "4420", 00:13:42.654 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:42.654 "psk": "/tmp/tmp.aiTeTGQLfS", 00:13:42.654 "method": "bdev_nvme_attach_controller", 00:13:42.654 "req_id": 1 00:13:42.654 } 00:13:42.654 Got JSON-RPC error response 00:13:42.654 response: 00:13:42.654 { 00:13:42.654 "code": -32602, 00:13:42.654 "message": "Invalid parameters" 00:13:42.654 } 00:13:42.654 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73318 00:13:42.654 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73318 ']' 00:13:42.654 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73318 00:13:42.654 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:42.654 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:42.654 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73318 00:13:42.912 killing process with pid 73318 00:13:42.912 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.912 00:13:42.912 Latency(us) 00:13:42.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.912 =================================================================================================================== 00:13:42.913 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:42.913 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:42.913 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:42.913 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73318' 00:13:42.913 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73318 00:13:42.913 [2024-05-16 18:34:56.164170] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:42.913 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73318 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:43.171 
18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73340 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73340 /var/tmp/bdevperf.sock 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73340 ']' 00:13:43.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:43.171 18:34:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:43.171 [2024-05-16 18:34:56.514968] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:13:43.171 [2024-05-16 18:34:56.515068] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73340 ] 00:13:43.171 [2024-05-16 18:34:56.648620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.429 [2024-05-16 18:34:56.794836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.429 [2024-05-16 18:34:56.865556] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:44.362 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:44.362 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:44.362 18:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:44.362 [2024-05-16 18:34:57.739693] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:44.362 [2024-05-16 18:34:57.741954] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18536a0 (9): Bad file descriptor 00:13:44.362 [2024-05-16 18:34:57.742949] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:44.362 [2024-05-16 18:34:57.743121] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:44.362 [2024-05-16 18:34:57.743273] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
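The three NOT cases traced above (target/tls.sh@149, @152 and @155) all come down to the same server-side lookup: the target resolves the pre-shared key by the identity string "NVMe0R01 <hostnqn> <subnqn>", so a key registered for host1 on cnode1 is found neither for host2 nor for cnode2, and with no --psk at all the plain-TCP connect against the listener created with -k is torn down before the controller initializes (the errno 107 entries above). Collapsed into a single bdevperf instance (the trace restarts bdevperf for each case), a sketch of the three checks:

    rpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock "$@"
    }
    expect_fail() {
        if "$@"; then
            echo "ERROR: expected failure but the command succeeded: $*" >&2
            exit 1
        fi
    }

    # wrong hostnqn: identity "NVMe0R01 ...host2 ...cnode1" has no registered key
    expect_fail rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk "$PSK_FILE"
    # wrong subnqn: identity "NVMe0R01 ...host1 ...cnode2" has no registered key
    expect_fail rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk "$PSK_FILE"
    # no PSK at all: plain TCP against a listener that was created with -k
    expect_fail rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1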
00:13:44.362 request: 00:13:44.362 { 00:13:44.362 "name": "TLSTEST", 00:13:44.362 "trtype": "tcp", 00:13:44.362 "traddr": "10.0.0.2", 00:13:44.362 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:44.362 "adrfam": "ipv4", 00:13:44.362 "trsvcid": "4420", 00:13:44.362 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.362 "method": "bdev_nvme_attach_controller", 00:13:44.362 "req_id": 1 00:13:44.362 } 00:13:44.362 Got JSON-RPC error response 00:13:44.362 response: 00:13:44.362 { 00:13:44.362 "code": -32602, 00:13:44.362 "message": "Invalid parameters" 00:13:44.362 } 00:13:44.362 18:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73340 00:13:44.362 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73340 ']' 00:13:44.362 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73340 00:13:44.362 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:44.362 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.362 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73340 00:13:44.362 killing process with pid 73340 00:13:44.362 Received shutdown signal, test time was about 10.000000 seconds 00:13:44.362 00:13:44.362 Latency(us) 00:13:44.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.362 =================================================================================================================== 00:13:44.363 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:44.363 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:44.363 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:44.363 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73340' 00:13:44.363 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73340 00:13:44.363 18:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73340 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 72885 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 72885 ']' 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 72885 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 72885 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:44.621 killing process with pid 72885 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 72885' 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 72885 00:13:44.621 [2024-05-16 
18:34:58.113446] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:44.621 [2024-05-16 18:34:58.113492] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:44.621 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 72885 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Z1hB88Ybzi 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Z1hB88Ybzi 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73383 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73383 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73383 ']' 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:45.187 18:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:45.187 [2024-05-16 18:34:58.536294] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
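format_interchange_psk above (target/tls.sh@159, via the nvmf/common.sh format_key helper) turns the 48-character key string and the digest argument 2 into the interchange form NVMeTLSkey-1:02:<base64>: that is written to /tmp/tmp.Z1hB88Ybzi and chmod'ed to 0600. A rough reconstruction of that helper follows; the appended little-endian CRC-32 is an assumption, not something the trace states explicitly:

    key=00112233445566778899aabbccddeeff0011223344556677
    # Assumption: base64 of (key string || CRC-32 of the key string, little-endian),
    # wrapped as NVMeTLSkey-1:<digest>:<b64>:
    python3 -c 'import base64,zlib,sys; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k+crc).decode())' "$key"

Whatever the exact CRC handling, the operational detail the later steps depend on is the 0600 mode on the resulting key file.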
00:13:45.187 [2024-05-16 18:34:58.536675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.187 [2024-05-16 18:34:58.675928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.444 [2024-05-16 18:34:58.839095] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.445 [2024-05-16 18:34:58.839719] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.445 [2024-05-16 18:34:58.839748] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.445 [2024-05-16 18:34:58.839760] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.445 [2024-05-16 18:34:58.839769] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.445 [2024-05-16 18:34:58.839816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.445 [2024-05-16 18:34:58.912998] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:46.011 18:34:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:46.011 18:34:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:46.011 18:34:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.011 18:34:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:46.011 18:34:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:46.269 18:34:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.269 18:34:59 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Z1hB88Ybzi 00:13:46.269 18:34:59 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Z1hB88Ybzi 00:13:46.269 18:34:59 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:46.269 [2024-05-16 18:34:59.750169] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.527 18:34:59 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:46.527 18:35:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:46.786 [2024-05-16 18:35:00.258269] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:46.786 [2024-05-16 18:35:00.258450] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:46.786 [2024-05-16 18:35:00.258743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.786 18:35:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:47.352 malloc0 00:13:47.352 18:35:00 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:47.352 18:35:00 
nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi 00:13:47.610 [2024-05-16 18:35:01.044354] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z1hB88Ybzi 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Z1hB88Ybzi' 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73438 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73438 /var/tmp/bdevperf.sock 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73438 ']' 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:47.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:47.610 18:35:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:47.869 [2024-05-16 18:35:01.112011] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:13:47.869 [2024-05-16 18:35:01.112349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73438 ] 00:13:47.869 [2024-05-16 18:35:01.246080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.141 [2024-05-16 18:35:01.425119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:48.141 [2024-05-16 18:35:01.495028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:48.721 18:35:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:48.721 18:35:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:13:48.721 18:35:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi 00:13:48.979 [2024-05-16 18:35:02.376558] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:48.979 [2024-05-16 18:35:02.376713] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:48.979 TLSTESTn1 00:13:48.979 18:35:02 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:49.237 Running I/O for 10 seconds... 00:13:59.279 00:13:59.279 Latency(us) 00:13:59.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.279 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:59.279 Verification LBA range: start 0x0 length 0x2000 00:13:59.279 TLSTESTn1 : 10.03 3784.73 14.78 0.00 0.00 33744.88 7060.01 35508.60 00:13:59.279 =================================================================================================================== 00:13:59.279 Total : 3784.73 14.78 0.00 0.00 33744.88 7060.01 35508.60 00:13:59.279 0 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73438 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73438 ']' 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73438 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73438 00:13:59.279 killing process with pid 73438 00:13:59.279 Received shutdown signal, test time was about 10.000000 seconds 00:13:59.279 00:13:59.279 Latency(us) 00:13:59.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.279 =================================================================================================================== 00:13:59.279 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 
00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73438' 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73438 00:13:59.279 [2024-05-16 18:35:12.666145] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:59.279 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73438 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Z1hB88Ybzi 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z1hB88Ybzi 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z1hB88Ybzi 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:59.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z1hB88Ybzi 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Z1hB88Ybzi' 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73574 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73574 /var/tmp/bdevperf.sock 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73574 ']' 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:59.537 18:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.794 [2024-05-16 18:35:13.039184] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:13:59.794 [2024-05-16 18:35:13.039306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73574 ] 00:13:59.794 [2024-05-16 18:35:13.179977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.053 [2024-05-16 18:35:13.347609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.053 [2024-05-16 18:35:13.423175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:00.642 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:00.642 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:00.642 18:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi 00:14:00.900 [2024-05-16 18:35:14.320238] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:00.900 [2024-05-16 18:35:14.320353] bdev_nvme.c:6116:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:00.900 [2024-05-16 18:35:14.320367] bdev_nvme.c:6225:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Z1hB88Ybzi 00:14:00.900 request: 00:14:00.900 { 00:14:00.900 "name": "TLSTEST", 00:14:00.900 "trtype": "tcp", 00:14:00.900 "traddr": "10.0.0.2", 00:14:00.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:00.900 "adrfam": "ipv4", 00:14:00.900 "trsvcid": "4420", 00:14:00.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.900 "psk": "/tmp/tmp.Z1hB88Ybzi", 00:14:00.900 "method": "bdev_nvme_attach_controller", 00:14:00.900 "req_id": 1 00:14:00.900 } 00:14:00.900 Got JSON-RPC error response 00:14:00.900 response: 00:14:00.900 { 00:14:00.900 "code": -1, 00:14:00.900 "message": "Operation not permitted" 00:14:00.900 } 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73574 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73574 ']' 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73574 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73574 00:14:00.900 killing process with pid 73574 00:14:00.900 Received shutdown signal, test time was about 10.000000 seconds 00:14:00.900 00:14:00.900 Latency(us) 00:14:00.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.900 =================================================================================================================== 00:14:00.900 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73574' 00:14:00.900 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73574 00:14:00.900 18:35:14 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73574 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73383 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73383 ']' 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73383 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73383 00:14:01.159 killing process with pid 73383 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73383' 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73383 00:14:01.159 [2024-05-16 18:35:14.628015] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:01.159 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73383 00:14:01.159 [2024-05-16 18:35:14.628105] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73612 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73612 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73612 ']' 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:01.726 18:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.726 [2024-05-16 18:35:15.026388] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:01.726 [2024-05-16 18:35:15.026750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.726 [2024-05-16 18:35:15.163336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.984 [2024-05-16 18:35:15.310759] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.984 [2024-05-16 18:35:15.310851] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.984 [2024-05-16 18:35:15.310865] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.984 [2024-05-16 18:35:15.310875] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.984 [2024-05-16 18:35:15.310883] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.984 [2024-05-16 18:35:15.310915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.984 [2024-05-16 18:35:15.383185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:02.549 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:02.549 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:02.549 18:35:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.549 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.549 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:02.807 18:35:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.807 18:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Z1hB88Ybzi 00:14:02.807 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:02.807 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Z1hB88Ybzi 00:14:02.807 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:02.807 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.807 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:02.807 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:02.808 18:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Z1hB88Ybzi 00:14:02.808 18:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Z1hB88Ybzi 00:14:02.808 18:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:03.065 [2024-05-16 18:35:16.358202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.065 18:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 
00:14:03.323 18:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:03.581 [2024-05-16 18:35:16.850214] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:03.581 [2024-05-16 18:35:16.850386] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:03.581 [2024-05-16 18:35:16.850630] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.581 18:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:03.839 malloc0 00:14:03.840 18:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:04.098 18:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi 00:14:04.356 [2024-05-16 18:35:17.673108] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:04.356 [2024-05-16 18:35:17.673183] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:04.356 [2024-05-16 18:35:17.673230] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:04.356 request: 00:14:04.356 { 00:14:04.356 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:04.356 "host": "nqn.2016-06.io.spdk:host1", 00:14:04.357 "psk": "/tmp/tmp.Z1hB88Ybzi", 00:14:04.357 "method": "nvmf_subsystem_add_host", 00:14:04.357 "req_id": 1 00:14:04.357 } 00:14:04.357 Got JSON-RPC error response 00:14:04.357 response: 00:14:04.357 { 00:14:04.357 "code": -32603, 00:14:04.357 "message": "Internal error" 00:14:04.357 } 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73612 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73612 ']' 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73612 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73612 00:14:04.357 killing process with pid 73612 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73612' 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73612 00:14:04.357 [2024-05-16 18:35:17.722171] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:04.357 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73612 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Z1hB88Ybzi 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73675 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73675 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73675 ']' 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:04.615 18:35:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.615 [2024-05-16 18:35:18.063661] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:04.615 [2024-05-16 18:35:18.063845] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.878 [2024-05-16 18:35:18.220679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.878 [2024-05-16 18:35:18.338328] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.878 [2024-05-16 18:35:18.338391] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.878 [2024-05-16 18:35:18.338404] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.878 [2024-05-16 18:35:18.338412] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.878 [2024-05-16 18:35:18.338419] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
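The same permission check exists on the target side: nvmf_subsystem_add_host fails with "Could not retrieve PSK from file" and JSON-RPC -32603 while /tmp/tmp.Z1hB88Ybzi is still world-readable, so the first target instance (pid 73612) is torn down. The chmod 0600 at target/tls.sh@181 above is what allows the retry; a sketch of the call that then succeeds in the next pass, with the same NQNs and key path as in this log, issued against the default /var/tmp/spdk.sock:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi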
00:14:04.878 [2024-05-16 18:35:18.338451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.145 [2024-05-16 18:35:18.392438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:05.713 18:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:05.713 18:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:05.713 18:35:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.713 18:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:05.713 18:35:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:05.713 18:35:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.713 18:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Z1hB88Ybzi 00:14:05.713 18:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Z1hB88Ybzi 00:14:05.713 18:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:05.972 [2024-05-16 18:35:19.424771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.972 18:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:06.230 18:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:06.489 [2024-05-16 18:35:19.924832] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:06.489 [2024-05-16 18:35:19.924947] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:06.489 [2024-05-16 18:35:19.925140] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.489 18:35:19 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:06.747 malloc0 00:14:06.747 18:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:07.005 18:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi 00:14:07.263 [2024-05-16 18:35:20.628445] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:07.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
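At this point setup_nvmf_tgt has finished for the second target instance (pid 73675) and bdevperf is about to be started. Collected from the xtrace lines above, the complete target-side sequence for this pass is, with paths, NQNs, and the key file exactly as in this run ($RPC is only a shorthand added here, not part of the captured commands):
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o            # the -o lines up with "c2h_success": false in the saved config below
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi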
00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73729 00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73729 /var/tmp/bdevperf.sock 00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73729 ']' 00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:07.263 18:35:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:07.263 [2024-05-16 18:35:20.717313] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:07.263 [2024-05-16 18:35:20.717861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73729 ] 00:14:07.520 [2024-05-16 18:35:20.859740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.779 [2024-05-16 18:35:21.026504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:07.779 [2024-05-16 18:35:21.098295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:08.346 18:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:08.346 18:35:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:08.346 18:35:21 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi 00:14:08.604 [2024-05-16 18:35:22.004759] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:08.604 [2024-05-16 18:35:22.004917] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:08.604 TLSTESTn1 00:14:08.604 18:35:22 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:09.171 18:35:22 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:09.171 "subsystems": [ 00:14:09.171 { 00:14:09.171 "subsystem": "keyring", 00:14:09.171 "config": [] 00:14:09.171 }, 00:14:09.171 { 00:14:09.171 "subsystem": "iobuf", 00:14:09.171 "config": [ 00:14:09.171 { 00:14:09.171 "method": "iobuf_set_options", 00:14:09.171 "params": { 00:14:09.171 "small_pool_count": 8192, 00:14:09.171 "large_pool_count": 1024, 00:14:09.171 "small_bufsize": 8192, 00:14:09.171 "large_bufsize": 135168 00:14:09.171 } 00:14:09.171 } 00:14:09.171 ] 00:14:09.171 }, 00:14:09.171 { 00:14:09.171 "subsystem": "sock", 00:14:09.171 "config": [ 00:14:09.171 { 00:14:09.171 "method": 
"sock_set_default_impl", 00:14:09.171 "params": { 00:14:09.171 "impl_name": "uring" 00:14:09.171 } 00:14:09.171 }, 00:14:09.171 { 00:14:09.171 "method": "sock_impl_set_options", 00:14:09.171 "params": { 00:14:09.171 "impl_name": "ssl", 00:14:09.171 "recv_buf_size": 4096, 00:14:09.171 "send_buf_size": 4096, 00:14:09.171 "enable_recv_pipe": true, 00:14:09.171 "enable_quickack": false, 00:14:09.171 "enable_placement_id": 0, 00:14:09.171 "enable_zerocopy_send_server": true, 00:14:09.171 "enable_zerocopy_send_client": false, 00:14:09.171 "zerocopy_threshold": 0, 00:14:09.171 "tls_version": 0, 00:14:09.171 "enable_ktls": false 00:14:09.171 } 00:14:09.171 }, 00:14:09.171 { 00:14:09.171 "method": "sock_impl_set_options", 00:14:09.171 "params": { 00:14:09.171 "impl_name": "posix", 00:14:09.171 "recv_buf_size": 2097152, 00:14:09.171 "send_buf_size": 2097152, 00:14:09.171 "enable_recv_pipe": true, 00:14:09.171 "enable_quickack": false, 00:14:09.171 "enable_placement_id": 0, 00:14:09.171 "enable_zerocopy_send_server": true, 00:14:09.171 "enable_zerocopy_send_client": false, 00:14:09.171 "zerocopy_threshold": 0, 00:14:09.171 "tls_version": 0, 00:14:09.171 "enable_ktls": false 00:14:09.171 } 00:14:09.171 }, 00:14:09.171 { 00:14:09.171 "method": "sock_impl_set_options", 00:14:09.171 "params": { 00:14:09.171 "impl_name": "uring", 00:14:09.171 "recv_buf_size": 2097152, 00:14:09.171 "send_buf_size": 2097152, 00:14:09.171 "enable_recv_pipe": true, 00:14:09.171 "enable_quickack": false, 00:14:09.171 "enable_placement_id": 0, 00:14:09.171 "enable_zerocopy_send_server": false, 00:14:09.171 "enable_zerocopy_send_client": false, 00:14:09.171 "zerocopy_threshold": 0, 00:14:09.171 "tls_version": 0, 00:14:09.171 "enable_ktls": false 00:14:09.171 } 00:14:09.171 } 00:14:09.171 ] 00:14:09.171 }, 00:14:09.171 { 00:14:09.171 "subsystem": "vmd", 00:14:09.171 "config": [] 00:14:09.171 }, 00:14:09.171 { 00:14:09.171 "subsystem": "accel", 00:14:09.171 "config": [ 00:14:09.171 { 00:14:09.171 "method": "accel_set_options", 00:14:09.171 "params": { 00:14:09.171 "small_cache_size": 128, 00:14:09.171 "large_cache_size": 16, 00:14:09.171 "task_count": 2048, 00:14:09.171 "sequence_count": 2048, 00:14:09.171 "buf_count": 2048 00:14:09.171 } 00:14:09.171 } 00:14:09.171 ] 00:14:09.171 }, 00:14:09.171 { 00:14:09.171 "subsystem": "bdev", 00:14:09.171 "config": [ 00:14:09.171 { 00:14:09.171 "method": "bdev_set_options", 00:14:09.171 "params": { 00:14:09.171 "bdev_io_pool_size": 65535, 00:14:09.171 "bdev_io_cache_size": 256, 00:14:09.171 "bdev_auto_examine": true, 00:14:09.171 "iobuf_small_cache_size": 128, 00:14:09.171 "iobuf_large_cache_size": 16 00:14:09.171 } 00:14:09.171 }, 00:14:09.171 { 00:14:09.171 "method": "bdev_raid_set_options", 00:14:09.171 "params": { 00:14:09.171 "process_window_size_kb": 1024 00:14:09.171 } 00:14:09.171 }, 00:14:09.171 { 00:14:09.172 "method": "bdev_iscsi_set_options", 00:14:09.172 "params": { 00:14:09.172 "timeout_sec": 30 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "bdev_nvme_set_options", 00:14:09.172 "params": { 00:14:09.172 "action_on_timeout": "none", 00:14:09.172 "timeout_us": 0, 00:14:09.172 "timeout_admin_us": 0, 00:14:09.172 "keep_alive_timeout_ms": 10000, 00:14:09.172 "arbitration_burst": 0, 00:14:09.172 "low_priority_weight": 0, 00:14:09.172 "medium_priority_weight": 0, 00:14:09.172 "high_priority_weight": 0, 00:14:09.172 "nvme_adminq_poll_period_us": 10000, 00:14:09.172 "nvme_ioq_poll_period_us": 0, 00:14:09.172 "io_queue_requests": 0, 00:14:09.172 "delay_cmd_submit": 
true, 00:14:09.172 "transport_retry_count": 4, 00:14:09.172 "bdev_retry_count": 3, 00:14:09.172 "transport_ack_timeout": 0, 00:14:09.172 "ctrlr_loss_timeout_sec": 0, 00:14:09.172 "reconnect_delay_sec": 0, 00:14:09.172 "fast_io_fail_timeout_sec": 0, 00:14:09.172 "disable_auto_failback": false, 00:14:09.172 "generate_uuids": false, 00:14:09.172 "transport_tos": 0, 00:14:09.172 "nvme_error_stat": false, 00:14:09.172 "rdma_srq_size": 0, 00:14:09.172 "io_path_stat": false, 00:14:09.172 "allow_accel_sequence": false, 00:14:09.172 "rdma_max_cq_size": 0, 00:14:09.172 "rdma_cm_event_timeout_ms": 0, 00:14:09.172 "dhchap_digests": [ 00:14:09.172 "sha256", 00:14:09.172 "sha384", 00:14:09.172 "sha512" 00:14:09.172 ], 00:14:09.172 "dhchap_dhgroups": [ 00:14:09.172 "null", 00:14:09.172 "ffdhe2048", 00:14:09.172 "ffdhe3072", 00:14:09.172 "ffdhe4096", 00:14:09.172 "ffdhe6144", 00:14:09.172 "ffdhe8192" 00:14:09.172 ] 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "bdev_nvme_set_hotplug", 00:14:09.172 "params": { 00:14:09.172 "period_us": 100000, 00:14:09.172 "enable": false 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "bdev_malloc_create", 00:14:09.172 "params": { 00:14:09.172 "name": "malloc0", 00:14:09.172 "num_blocks": 8192, 00:14:09.172 "block_size": 4096, 00:14:09.172 "physical_block_size": 4096, 00:14:09.172 "uuid": "ad8f7c9a-1e61-437a-b7f8-37817654be0e", 00:14:09.172 "optimal_io_boundary": 0 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "bdev_wait_for_examine" 00:14:09.172 } 00:14:09.172 ] 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "subsystem": "nbd", 00:14:09.172 "config": [] 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "subsystem": "scheduler", 00:14:09.172 "config": [ 00:14:09.172 { 00:14:09.172 "method": "framework_set_scheduler", 00:14:09.172 "params": { 00:14:09.172 "name": "static" 00:14:09.172 } 00:14:09.172 } 00:14:09.172 ] 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "subsystem": "nvmf", 00:14:09.172 "config": [ 00:14:09.172 { 00:14:09.172 "method": "nvmf_set_config", 00:14:09.172 "params": { 00:14:09.172 "discovery_filter": "match_any", 00:14:09.172 "admin_cmd_passthru": { 00:14:09.172 "identify_ctrlr": false 00:14:09.172 } 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "nvmf_set_max_subsystems", 00:14:09.172 "params": { 00:14:09.172 "max_subsystems": 1024 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "nvmf_set_crdt", 00:14:09.172 "params": { 00:14:09.172 "crdt1": 0, 00:14:09.172 "crdt2": 0, 00:14:09.172 "crdt3": 0 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "nvmf_create_transport", 00:14:09.172 "params": { 00:14:09.172 "trtype": "TCP", 00:14:09.172 "max_queue_depth": 128, 00:14:09.172 "max_io_qpairs_per_ctrlr": 127, 00:14:09.172 "in_capsule_data_size": 4096, 00:14:09.172 "max_io_size": 131072, 00:14:09.172 "io_unit_size": 131072, 00:14:09.172 "max_aq_depth": 128, 00:14:09.172 "num_shared_buffers": 511, 00:14:09.172 "buf_cache_size": 4294967295, 00:14:09.172 "dif_insert_or_strip": false, 00:14:09.172 "zcopy": false, 00:14:09.172 "c2h_success": false, 00:14:09.172 "sock_priority": 0, 00:14:09.172 "abort_timeout_sec": 1, 00:14:09.172 "ack_timeout": 0, 00:14:09.172 "data_wr_pool_size": 0 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "nvmf_create_subsystem", 00:14:09.172 "params": { 00:14:09.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.172 "allow_any_host": false, 00:14:09.172 "serial_number": "SPDK00000000000001", 00:14:09.172 
"model_number": "SPDK bdev Controller", 00:14:09.172 "max_namespaces": 10, 00:14:09.172 "min_cntlid": 1, 00:14:09.172 "max_cntlid": 65519, 00:14:09.172 "ana_reporting": false 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "nvmf_subsystem_add_host", 00:14:09.172 "params": { 00:14:09.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.172 "host": "nqn.2016-06.io.spdk:host1", 00:14:09.172 "psk": "/tmp/tmp.Z1hB88Ybzi" 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "nvmf_subsystem_add_ns", 00:14:09.172 "params": { 00:14:09.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.172 "namespace": { 00:14:09.172 "nsid": 1, 00:14:09.172 "bdev_name": "malloc0", 00:14:09.172 "nguid": "AD8F7C9A1E61437AB7F837817654BE0E", 00:14:09.172 "uuid": "ad8f7c9a-1e61-437a-b7f8-37817654be0e", 00:14:09.172 "no_auto_visible": false 00:14:09.172 } 00:14:09.172 } 00:14:09.172 }, 00:14:09.172 { 00:14:09.172 "method": "nvmf_subsystem_add_listener", 00:14:09.172 "params": { 00:14:09.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.172 "listen_address": { 00:14:09.172 "trtype": "TCP", 00:14:09.172 "adrfam": "IPv4", 00:14:09.172 "traddr": "10.0.0.2", 00:14:09.172 "trsvcid": "4420" 00:14:09.172 }, 00:14:09.172 "secure_channel": true 00:14:09.172 } 00:14:09.172 } 00:14:09.172 ] 00:14:09.172 } 00:14:09.172 ] 00:14:09.172 }' 00:14:09.172 18:35:22 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:09.431 18:35:22 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:09.431 "subsystems": [ 00:14:09.431 { 00:14:09.431 "subsystem": "keyring", 00:14:09.431 "config": [] 00:14:09.431 }, 00:14:09.431 { 00:14:09.431 "subsystem": "iobuf", 00:14:09.431 "config": [ 00:14:09.431 { 00:14:09.431 "method": "iobuf_set_options", 00:14:09.431 "params": { 00:14:09.431 "small_pool_count": 8192, 00:14:09.431 "large_pool_count": 1024, 00:14:09.431 "small_bufsize": 8192, 00:14:09.431 "large_bufsize": 135168 00:14:09.431 } 00:14:09.431 } 00:14:09.431 ] 00:14:09.431 }, 00:14:09.431 { 00:14:09.431 "subsystem": "sock", 00:14:09.431 "config": [ 00:14:09.431 { 00:14:09.431 "method": "sock_set_default_impl", 00:14:09.431 "params": { 00:14:09.431 "impl_name": "uring" 00:14:09.431 } 00:14:09.431 }, 00:14:09.431 { 00:14:09.431 "method": "sock_impl_set_options", 00:14:09.431 "params": { 00:14:09.431 "impl_name": "ssl", 00:14:09.431 "recv_buf_size": 4096, 00:14:09.431 "send_buf_size": 4096, 00:14:09.431 "enable_recv_pipe": true, 00:14:09.431 "enable_quickack": false, 00:14:09.431 "enable_placement_id": 0, 00:14:09.431 "enable_zerocopy_send_server": true, 00:14:09.431 "enable_zerocopy_send_client": false, 00:14:09.431 "zerocopy_threshold": 0, 00:14:09.431 "tls_version": 0, 00:14:09.431 "enable_ktls": false 00:14:09.431 } 00:14:09.431 }, 00:14:09.431 { 00:14:09.431 "method": "sock_impl_set_options", 00:14:09.431 "params": { 00:14:09.431 "impl_name": "posix", 00:14:09.431 "recv_buf_size": 2097152, 00:14:09.431 "send_buf_size": 2097152, 00:14:09.431 "enable_recv_pipe": true, 00:14:09.431 "enable_quickack": false, 00:14:09.431 "enable_placement_id": 0, 00:14:09.431 "enable_zerocopy_send_server": true, 00:14:09.431 "enable_zerocopy_send_client": false, 00:14:09.431 "zerocopy_threshold": 0, 00:14:09.431 "tls_version": 0, 00:14:09.431 "enable_ktls": false 00:14:09.431 } 00:14:09.431 }, 00:14:09.431 { 00:14:09.431 "method": "sock_impl_set_options", 00:14:09.431 "params": { 00:14:09.432 "impl_name": "uring", 00:14:09.432 "recv_buf_size": 
2097152, 00:14:09.432 "send_buf_size": 2097152, 00:14:09.432 "enable_recv_pipe": true, 00:14:09.432 "enable_quickack": false, 00:14:09.432 "enable_placement_id": 0, 00:14:09.432 "enable_zerocopy_send_server": false, 00:14:09.432 "enable_zerocopy_send_client": false, 00:14:09.432 "zerocopy_threshold": 0, 00:14:09.432 "tls_version": 0, 00:14:09.432 "enable_ktls": false 00:14:09.432 } 00:14:09.432 } 00:14:09.432 ] 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "subsystem": "vmd", 00:14:09.432 "config": [] 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "subsystem": "accel", 00:14:09.432 "config": [ 00:14:09.432 { 00:14:09.432 "method": "accel_set_options", 00:14:09.432 "params": { 00:14:09.432 "small_cache_size": 128, 00:14:09.432 "large_cache_size": 16, 00:14:09.432 "task_count": 2048, 00:14:09.432 "sequence_count": 2048, 00:14:09.432 "buf_count": 2048 00:14:09.432 } 00:14:09.432 } 00:14:09.432 ] 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "subsystem": "bdev", 00:14:09.432 "config": [ 00:14:09.432 { 00:14:09.432 "method": "bdev_set_options", 00:14:09.432 "params": { 00:14:09.432 "bdev_io_pool_size": 65535, 00:14:09.432 "bdev_io_cache_size": 256, 00:14:09.432 "bdev_auto_examine": true, 00:14:09.432 "iobuf_small_cache_size": 128, 00:14:09.432 "iobuf_large_cache_size": 16 00:14:09.432 } 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "method": "bdev_raid_set_options", 00:14:09.432 "params": { 00:14:09.432 "process_window_size_kb": 1024 00:14:09.432 } 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "method": "bdev_iscsi_set_options", 00:14:09.432 "params": { 00:14:09.432 "timeout_sec": 30 00:14:09.432 } 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "method": "bdev_nvme_set_options", 00:14:09.432 "params": { 00:14:09.432 "action_on_timeout": "none", 00:14:09.432 "timeout_us": 0, 00:14:09.432 "timeout_admin_us": 0, 00:14:09.432 "keep_alive_timeout_ms": 10000, 00:14:09.432 "arbitration_burst": 0, 00:14:09.432 "low_priority_weight": 0, 00:14:09.432 "medium_priority_weight": 0, 00:14:09.432 "high_priority_weight": 0, 00:14:09.432 "nvme_adminq_poll_period_us": 10000, 00:14:09.432 "nvme_ioq_poll_period_us": 0, 00:14:09.432 "io_queue_requests": 512, 00:14:09.432 "delay_cmd_submit": true, 00:14:09.432 "transport_retry_count": 4, 00:14:09.432 "bdev_retry_count": 3, 00:14:09.432 "transport_ack_timeout": 0, 00:14:09.432 "ctrlr_loss_timeout_sec": 0, 00:14:09.432 "reconnect_delay_sec": 0, 00:14:09.432 "fast_io_fail_timeout_sec": 0, 00:14:09.432 "disable_auto_failback": false, 00:14:09.432 "generate_uuids": false, 00:14:09.432 "transport_tos": 0, 00:14:09.432 "nvme_error_stat": false, 00:14:09.432 "rdma_srq_size": 0, 00:14:09.432 "io_path_stat": false, 00:14:09.432 "allow_accel_sequence": false, 00:14:09.432 "rdma_max_cq_size": 0, 00:14:09.432 "rdma_cm_event_timeout_ms": 0, 00:14:09.432 "dhchap_digests": [ 00:14:09.432 "sha256", 00:14:09.432 "sha384", 00:14:09.432 "sha512" 00:14:09.432 ], 00:14:09.432 "dhchap_dhgroups": [ 00:14:09.432 "null", 00:14:09.432 "ffdhe2048", 00:14:09.432 "ffdhe3072", 00:14:09.432 "ffdhe4096", 00:14:09.432 "ffdhe6144", 00:14:09.432 "ffdhe8192" 00:14:09.432 ] 00:14:09.432 } 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "method": "bdev_nvme_attach_controller", 00:14:09.432 "params": { 00:14:09.432 "name": "TLSTEST", 00:14:09.432 "trtype": "TCP", 00:14:09.432 "adrfam": "IPv4", 00:14:09.432 "traddr": "10.0.0.2", 00:14:09.432 "trsvcid": "4420", 00:14:09.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:09.432 "prchk_reftag": false, 00:14:09.432 "prchk_guard": false, 00:14:09.432 "ctrlr_loss_timeout_sec": 0, 
00:14:09.432 "reconnect_delay_sec": 0, 00:14:09.432 "fast_io_fail_timeout_sec": 0, 00:14:09.432 "psk": "/tmp/tmp.Z1hB88Ybzi", 00:14:09.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:09.432 "hdgst": false, 00:14:09.432 "ddgst": false 00:14:09.432 } 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "method": "bdev_nvme_set_hotplug", 00:14:09.432 "params": { 00:14:09.432 "period_us": 100000, 00:14:09.432 "enable": false 00:14:09.432 } 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "method": "bdev_wait_for_examine" 00:14:09.432 } 00:14:09.432 ] 00:14:09.432 }, 00:14:09.432 { 00:14:09.432 "subsystem": "nbd", 00:14:09.432 "config": [] 00:14:09.432 } 00:14:09.432 ] 00:14:09.432 }' 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73729 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73729 ']' 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73729 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73729 00:14:09.432 killing process with pid 73729 00:14:09.432 Received shutdown signal, test time was about 10.000000 seconds 00:14:09.432 00:14:09.432 Latency(us) 00:14:09.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.432 =================================================================================================================== 00:14:09.432 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73729' 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73729 00:14:09.432 [2024-05-16 18:35:22.856335] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:09.432 18:35:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73729 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73675 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73675 ']' 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73675 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73675 00:14:10.000 killing process with pid 73675 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73675' 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73675 00:14:10.000 [2024-05-16 18:35:23.239695] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated 
in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:10.000 [2024-05-16 18:35:23.239750] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73675 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:10.000 18:35:23 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:10.000 "subsystems": [ 00:14:10.000 { 00:14:10.000 "subsystem": "keyring", 00:14:10.000 "config": [] 00:14:10.000 }, 00:14:10.000 { 00:14:10.000 "subsystem": "iobuf", 00:14:10.000 "config": [ 00:14:10.000 { 00:14:10.000 "method": "iobuf_set_options", 00:14:10.000 "params": { 00:14:10.000 "small_pool_count": 8192, 00:14:10.000 "large_pool_count": 1024, 00:14:10.000 "small_bufsize": 8192, 00:14:10.000 "large_bufsize": 135168 00:14:10.000 } 00:14:10.000 } 00:14:10.000 ] 00:14:10.000 }, 00:14:10.000 { 00:14:10.000 "subsystem": "sock", 00:14:10.000 "config": [ 00:14:10.000 { 00:14:10.000 "method": "sock_set_default_impl", 00:14:10.000 "params": { 00:14:10.000 "impl_name": "uring" 00:14:10.000 } 00:14:10.000 }, 00:14:10.000 { 00:14:10.000 "method": "sock_impl_set_options", 00:14:10.000 "params": { 00:14:10.000 "impl_name": "ssl", 00:14:10.000 "recv_buf_size": 4096, 00:14:10.000 "send_buf_size": 4096, 00:14:10.000 "enable_recv_pipe": true, 00:14:10.000 "enable_quickack": false, 00:14:10.000 "enable_placement_id": 0, 00:14:10.000 "enable_zerocopy_send_server": true, 00:14:10.000 "enable_zerocopy_send_client": false, 00:14:10.000 "zerocopy_threshold": 0, 00:14:10.000 "tls_version": 0, 00:14:10.000 "enable_ktls": false 00:14:10.000 } 00:14:10.000 }, 00:14:10.000 { 00:14:10.000 "method": "sock_impl_set_options", 00:14:10.000 "params": { 00:14:10.000 "impl_name": "posix", 00:14:10.000 "recv_buf_size": 2097152, 00:14:10.000 "send_buf_size": 2097152, 00:14:10.000 "enable_recv_pipe": true, 00:14:10.000 "enable_quickack": false, 00:14:10.000 "enable_placement_id": 0, 00:14:10.000 "enable_zerocopy_send_server": true, 00:14:10.000 "enable_zerocopy_send_client": false, 00:14:10.000 "zerocopy_threshold": 0, 00:14:10.000 "tls_version": 0, 00:14:10.000 "enable_ktls": false 00:14:10.000 } 00:14:10.000 }, 00:14:10.000 { 00:14:10.000 "method": "sock_impl_set_options", 00:14:10.000 "params": { 00:14:10.000 "impl_name": "uring", 00:14:10.000 "recv_buf_size": 2097152, 00:14:10.000 "send_buf_size": 2097152, 00:14:10.000 "enable_recv_pipe": true, 00:14:10.000 "enable_quickack": false, 00:14:10.000 "enable_placement_id": 0, 00:14:10.000 "enable_zerocopy_send_server": false, 00:14:10.000 "enable_zerocopy_send_client": false, 00:14:10.000 "zerocopy_threshold": 0, 00:14:10.000 "tls_version": 0, 00:14:10.000 "enable_ktls": false 00:14:10.000 } 00:14:10.000 } 00:14:10.000 ] 00:14:10.000 }, 00:14:10.000 { 00:14:10.000 "subsystem": "vmd", 00:14:10.000 "config": [] 00:14:10.000 }, 00:14:10.000 { 00:14:10.000 "subsystem": "accel", 00:14:10.000 "config": [ 00:14:10.000 { 00:14:10.000 "method": "accel_set_options", 00:14:10.000 "params": { 00:14:10.000 "small_cache_size": 128, 00:14:10.000 "large_cache_size": 16, 00:14:10.001 "task_count": 2048, 00:14:10.001 "sequence_count": 2048, 00:14:10.001 "buf_count": 2048 00:14:10.001 } 00:14:10.001 } 00:14:10.001 ] 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "subsystem": "bdev", 00:14:10.001 "config": 
[ 00:14:10.001 { 00:14:10.001 "method": "bdev_set_options", 00:14:10.001 "params": { 00:14:10.001 "bdev_io_pool_size": 65535, 00:14:10.001 "bdev_io_cache_size": 256, 00:14:10.001 "bdev_auto_examine": true, 00:14:10.001 "iobuf_small_cache_size": 128, 00:14:10.001 "iobuf_large_cache_size": 16 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "bdev_raid_set_options", 00:14:10.001 "params": { 00:14:10.001 "process_window_size_kb": 1024 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "bdev_iscsi_set_options", 00:14:10.001 "params": { 00:14:10.001 "timeout_sec": 30 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "bdev_nvme_set_options", 00:14:10.001 "params": { 00:14:10.001 "action_on_timeout": "none", 00:14:10.001 "timeout_us": 0, 00:14:10.001 "timeout_admin_us": 0, 00:14:10.001 "keep_alive_timeout_ms": 10000, 00:14:10.001 "arbitration_burst": 0, 00:14:10.001 "low_priority_weight": 0, 00:14:10.001 "medium_priority_weight": 0, 00:14:10.001 "high_priority_weight": 0, 00:14:10.001 "nvme_adminq_poll_period_us": 10000, 00:14:10.001 "nvme_ioq_poll_period_us": 0, 00:14:10.001 "io_queue_requests": 0, 00:14:10.001 "delay_cmd_submit": true, 00:14:10.001 "transport_retry_count": 4, 00:14:10.001 "bdev_retry_count": 3, 00:14:10.001 "transport_ack_timeout": 0, 00:14:10.001 "ctrlr_loss_timeout_sec": 0, 00:14:10.001 "reconnect_delay_sec": 0, 00:14:10.001 "fast_io_fail_timeout_sec": 0, 00:14:10.001 "disable_auto_failback": false, 00:14:10.001 "generate_uuids": false, 00:14:10.001 "transport_tos": 0, 00:14:10.001 "nvme_error_stat": false, 00:14:10.001 "rdma_srq_size": 0, 00:14:10.001 "io_path_stat": false, 00:14:10.001 "allow_accel_sequence": false, 00:14:10.001 "rdma_max_cq_size": 0, 00:14:10.001 "rdma_cm_event_timeout_ms": 0, 00:14:10.001 "dhchap_digests": [ 00:14:10.001 "sha256", 00:14:10.001 "sha384", 00:14:10.001 "sha512" 00:14:10.001 ], 00:14:10.001 "dhchap_dhgroups": [ 00:14:10.001 "null", 00:14:10.001 "ffdhe2048", 00:14:10.001 "ffdhe3072", 00:14:10.001 "ffdhe4096", 00:14:10.001 "ffdhe6144", 00:14:10.001 "ffdhe8192" 00:14:10.001 ] 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "bdev_nvme_set_hotplug", 00:14:10.001 "params": { 00:14:10.001 "period_us": 100000, 00:14:10.001 "enable": false 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "bdev_malloc_create", 00:14:10.001 "params": { 00:14:10.001 "name": "malloc0", 00:14:10.001 "num_blocks": 8192, 00:14:10.001 "block_size": 4096, 00:14:10.001 "physical_block_size": 4096, 00:14:10.001 "uuid": "ad8f7c9a-1e61-437a-b7f8-37817654be0e", 00:14:10.001 "optimal_io_boundary": 0 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "bdev_wait_for_examine" 00:14:10.001 } 00:14:10.001 ] 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "subsystem": "nbd", 00:14:10.001 "config": [] 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "subsystem": "scheduler", 00:14:10.001 "config": [ 00:14:10.001 { 00:14:10.001 "method": "framework_set_scheduler", 00:14:10.001 "params": { 00:14:10.001 "name": "static" 00:14:10.001 } 00:14:10.001 } 00:14:10.001 ] 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "subsystem": "nvmf", 00:14:10.001 "config": [ 00:14:10.001 { 00:14:10.001 "method": "nvmf_set_config", 00:14:10.001 "params": { 00:14:10.001 "discovery_filter": "match_any", 00:14:10.001 "admin_cmd_passthru": { 00:14:10.001 "identify_ctrlr": false 00:14:10.001 } 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "nvmf_set_max_subsystems", 00:14:10.001 "params": { 
00:14:10.001 "max_subsystems": 1024 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "nvmf_set_crdt", 00:14:10.001 "params": { 00:14:10.001 "crdt1": 0, 00:14:10.001 "crdt2": 0, 00:14:10.001 "crdt3": 0 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "nvmf_create_transport", 00:14:10.001 "params": { 00:14:10.001 "trtype": "TCP", 00:14:10.001 "max_queue_depth": 128, 00:14:10.001 "max_io_qpairs_per_ctrlr": 127, 00:14:10.001 "in_capsule_data_size": 4096, 00:14:10.001 "max_io_size": 131072, 00:14:10.001 "io_unit_size": 131072, 00:14:10.001 "max_aq_depth": 128, 00:14:10.001 "num_shared_buffers": 511, 00:14:10.001 "buf_cache_size": 4294967295, 00:14:10.001 "dif_insert_or_strip": false, 00:14:10.001 "zcopy": false, 00:14:10.001 "c2h_success": false, 00:14:10.001 "sock_priority": 0, 00:14:10.001 "abort_timeout_sec": 1, 00:14:10.001 "ack_timeout": 0, 00:14:10.001 "data_wr_pool_size": 0 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "nvmf_create_subsystem", 00:14:10.001 "params": { 00:14:10.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.001 "allow_any_host": false, 00:14:10.001 "serial_number": "SPDK00000000000001", 00:14:10.001 "model_number": "SPDK bdev Controller", 00:14:10.001 "max_namespaces": 10, 00:14:10.001 "min_cntlid": 1, 00:14:10.001 "max_cntlid": 65519, 00:14:10.001 "ana_reporting": false 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "nvmf_subsystem_add_host", 00:14:10.001 "params": { 00:14:10.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.001 "host": "nqn.2016-06.io.spdk:host1", 00:14:10.001 "psk": "/tmp/tmp.Z1hB88Ybzi" 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "nvmf_subsystem_add_ns", 00:14:10.001 "params": { 00:14:10.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.001 "namespace": { 00:14:10.001 "nsid": 1, 00:14:10.001 "bdev_name": "malloc0", 00:14:10.001 "nguid": "AD8F7C9A1E61437AB7F837817654BE0E", 00:14:10.001 "uuid": "ad8f7c9a-1e61-437a-b7f8-37817654be0e", 00:14:10.001 "no_auto_visible": false 00:14:10.001 } 00:14:10.001 } 00:14:10.001 }, 00:14:10.001 { 00:14:10.001 "method": "nvmf_subsystem_add_listener", 00:14:10.001 "params": { 00:14:10.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.001 "listen_address": { 00:14:10.001 "trtype": "TCP", 00:14:10.001 "adrfam": "IPv4", 00:14:10.001 "traddr": "10.0.0.2", 00:14:10.001 "trsvcid": "4420" 00:14:10.001 }, 00:14:10.001 "secure_channel": true 00:14:10.001 } 00:14:10.001 } 00:14:10.001 ] 00:14:10.001 } 00:14:10.001 ] 00:14:10.001 }' 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73779 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73779 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73779 ']' 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:10.001 18:35:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:10.260 [2024-05-16 18:35:23.569706] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:10.260 [2024-05-16 18:35:23.569882] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.260 [2024-05-16 18:35:23.719696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.542 [2024-05-16 18:35:23.831879] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:10.542 [2024-05-16 18:35:23.831940] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:10.542 [2024-05-16 18:35:23.831953] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:10.542 [2024-05-16 18:35:23.831962] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:10.542 [2024-05-16 18:35:23.831969] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:10.542 [2024-05-16 18:35:23.832058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:10.542 [2024-05-16 18:35:23.998561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:10.852 [2024-05-16 18:35:24.069400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:10.852 [2024-05-16 18:35:24.085334] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:10.852 [2024-05-16 18:35:24.101287] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:10.852 [2024-05-16 18:35:24.101408] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:10.852 [2024-05-16 18:35:24.101601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:11.110 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:11.110 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:11.110 18:35:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:11.110 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:11.110 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
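The bdevperf side of this pass is driven the same way: instead of the rpc.py calls used earlier, the JSON echoed below is fed through -c /dev/fd/63, and it already contains the bdev_nvme_attach_controller entry with the PSK, so the TLS-backed TLSTEST controller exists as soon as the application is up. The launch line as used in this run, with /dev/fd/63 standing in for the echoed config:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63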
00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=73811 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 73811 /var/tmp/bdevperf.sock 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73811 ']' 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:11.369 18:35:24 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:11.369 "subsystems": [ 00:14:11.369 { 00:14:11.369 "subsystem": "keyring", 00:14:11.369 "config": [] 00:14:11.369 }, 00:14:11.369 { 00:14:11.369 "subsystem": "iobuf", 00:14:11.369 "config": [ 00:14:11.369 { 00:14:11.369 "method": "iobuf_set_options", 00:14:11.369 "params": { 00:14:11.369 "small_pool_count": 8192, 00:14:11.369 "large_pool_count": 1024, 00:14:11.369 "small_bufsize": 8192, 00:14:11.369 "large_bufsize": 135168 00:14:11.369 } 00:14:11.369 } 00:14:11.369 ] 00:14:11.369 }, 00:14:11.369 { 00:14:11.369 "subsystem": "sock", 00:14:11.369 "config": [ 00:14:11.369 { 00:14:11.369 "method": "sock_set_default_impl", 00:14:11.369 "params": { 00:14:11.369 "impl_name": "uring" 00:14:11.369 } 00:14:11.369 }, 00:14:11.369 { 00:14:11.369 "method": "sock_impl_set_options", 00:14:11.369 "params": { 00:14:11.369 "impl_name": "ssl", 00:14:11.369 "recv_buf_size": 4096, 00:14:11.369 "send_buf_size": 4096, 00:14:11.369 "enable_recv_pipe": true, 00:14:11.369 "enable_quickack": false, 00:14:11.369 "enable_placement_id": 0, 00:14:11.369 "enable_zerocopy_send_server": true, 00:14:11.369 "enable_zerocopy_send_client": false, 00:14:11.369 "zerocopy_threshold": 0, 00:14:11.369 "tls_version": 0, 00:14:11.370 "enable_ktls": false 00:14:11.370 } 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "method": "sock_impl_set_options", 00:14:11.370 "params": { 00:14:11.370 "impl_name": "posix", 00:14:11.370 "recv_buf_size": 2097152, 00:14:11.370 "send_buf_size": 2097152, 00:14:11.370 "enable_recv_pipe": true, 00:14:11.370 "enable_quickack": false, 00:14:11.370 "enable_placement_id": 0, 00:14:11.370 "enable_zerocopy_send_server": true, 00:14:11.370 "enable_zerocopy_send_client": false, 00:14:11.370 "zerocopy_threshold": 0, 00:14:11.370 "tls_version": 0, 00:14:11.370 "enable_ktls": false 00:14:11.370 } 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "method": "sock_impl_set_options", 00:14:11.370 "params": { 00:14:11.370 "impl_name": "uring", 00:14:11.370 "recv_buf_size": 2097152, 00:14:11.370 "send_buf_size": 2097152, 00:14:11.370 "enable_recv_pipe": true, 00:14:11.370 "enable_quickack": false, 00:14:11.370 "enable_placement_id": 0, 00:14:11.370 "enable_zerocopy_send_server": false, 00:14:11.370 "enable_zerocopy_send_client": false, 00:14:11.370 "zerocopy_threshold": 
0, 00:14:11.370 "tls_version": 0, 00:14:11.370 "enable_ktls": false 00:14:11.370 } 00:14:11.370 } 00:14:11.370 ] 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "subsystem": "vmd", 00:14:11.370 "config": [] 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "subsystem": "accel", 00:14:11.370 "config": [ 00:14:11.370 { 00:14:11.370 "method": "accel_set_options", 00:14:11.370 "params": { 00:14:11.370 "small_cache_size": 128, 00:14:11.370 "large_cache_size": 16, 00:14:11.370 "task_count": 2048, 00:14:11.370 "sequence_count": 2048, 00:14:11.370 "buf_count": 2048 00:14:11.370 } 00:14:11.370 } 00:14:11.370 ] 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "subsystem": "bdev", 00:14:11.370 "config": [ 00:14:11.370 { 00:14:11.370 "method": "bdev_set_options", 00:14:11.370 "params": { 00:14:11.370 "bdev_io_pool_size": 65535, 00:14:11.370 "bdev_io_cache_size": 256, 00:14:11.370 "bdev_auto_examine": true, 00:14:11.370 "iobuf_small_cache_size": 128, 00:14:11.370 "iobuf_large_cache_size": 16 00:14:11.370 } 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "method": "bdev_raid_set_options", 00:14:11.370 "params": { 00:14:11.370 "process_window_size_kb": 1024 00:14:11.370 } 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "method": "bdev_iscsi_set_options", 00:14:11.370 "params": { 00:14:11.370 "timeout_sec": 30 00:14:11.370 } 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "method": "bdev_nvme_set_options", 00:14:11.370 "params": { 00:14:11.370 "action_on_timeout": "none", 00:14:11.370 "timeout_us": 0, 00:14:11.370 "timeout_admin_us": 0, 00:14:11.370 "keep_alive_timeout_ms": 10000, 00:14:11.370 "arbitration_burst": 0, 00:14:11.370 "low_priority_weight": 0, 00:14:11.370 "medium_priority_weight": 0, 00:14:11.370 "high_priority_weight": 0, 00:14:11.370 "nvme_adminq_poll_period_us": 10000, 00:14:11.370 "nvme_ioq_poll_period_us": 0, 00:14:11.370 "io_queue_requests": 512, 00:14:11.370 "delay_cmd_submit": true, 00:14:11.370 "transport_retry_count": 4, 00:14:11.370 "bdev_retry_count": 3, 00:14:11.370 "transport_ack_timeout": 0, 00:14:11.370 "ctrlr_loss_timeout_sec": 0, 00:14:11.370 "reconnect_delay_sec": 0, 00:14:11.370 "fast_io_fail_timeout_sec": 0, 00:14:11.370 "disable_auto_failback": false, 00:14:11.370 "generate_uuids": false, 00:14:11.370 "transport_tos": 0, 00:14:11.370 "nvme_error_stat": false, 00:14:11.370 "rdma_srq_size": 0, 00:14:11.370 "io_path_stat": false, 00:14:11.370 "allow_accel_sequence": false, 00:14:11.370 "rdma_max_cq_size": 0, 00:14:11.370 "rdma_cm_event_timeout_ms": 0, 00:14:11.370 "dhchap_digests": [ 00:14:11.370 "sha256", 00:14:11.370 "sha384", 00:14:11.370 "sha512" 00:14:11.370 ], 00:14:11.370 "dhchap_dhgroups": [ 00:14:11.370 "null", 00:14:11.370 "ffdhe2048", 00:14:11.370 "ffdhe3072", 00:14:11.370 "ffdhe4096", 00:14:11.370 "ffdhe6144", 00:14:11.370 "ffdhe8192" 00:14:11.370 ] 00:14:11.370 } 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "method": "bdev_nvme_attach_controller", 00:14:11.370 "params": { 00:14:11.370 "name": "TLSTEST", 00:14:11.370 "trtype": "TCP", 00:14:11.370 "adrfam": "IPv4", 00:14:11.370 "traddr": "10.0.0.2", 00:14:11.370 "trsvcid": "4420", 00:14:11.370 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.370 "prchk_reftag": false, 00:14:11.370 "prchk_guard": false, 00:14:11.370 "ctrlr_loss_timeout_sec": 0, 00:14:11.370 "reconnect_delay_sec": 0, 00:14:11.370 "fast_io_fail_timeout_sec": 0, 00:14:11.370 "psk": "/tmp/tmp.Z1hB88Ybzi", 00:14:11.370 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.370 "hdgst": false, 00:14:11.370 "ddgst": false 00:14:11.370 } 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 
"method": "bdev_nvme_set_hotplug", 00:14:11.370 "params": { 00:14:11.370 "period_us": 100000, 00:14:11.370 "enable": false 00:14:11.370 } 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "method": "bdev_wait_for_examine" 00:14:11.370 } 00:14:11.370 ] 00:14:11.370 }, 00:14:11.370 { 00:14:11.370 "subsystem": "nbd", 00:14:11.370 "config": [] 00:14:11.370 } 00:14:11.370 ] 00:14:11.370 }' 00:14:11.370 [2024-05-16 18:35:24.704282] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:11.370 [2024-05-16 18:35:24.705189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73811 ] 00:14:11.370 [2024-05-16 18:35:24.841862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.629 [2024-05-16 18:35:24.991300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.887 [2024-05-16 18:35:25.144339] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:11.887 [2024-05-16 18:35:25.193852] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:11.887 [2024-05-16 18:35:25.194323] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:12.454 18:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:12.454 18:35:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:12.454 18:35:25 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:12.454 Running I/O for 10 seconds... 
00:14:22.422 00:14:22.422 Latency(us) 00:14:22.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.422 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:22.422 Verification LBA range: start 0x0 length 0x2000 00:14:22.422 TLSTESTn1 : 10.03 3654.49 14.28 0.00 0.00 34947.75 10009.13 37176.79 00:14:22.422 =================================================================================================================== 00:14:22.422 Total : 3654.49 14.28 0.00 0.00 34947.75 10009.13 37176.79 00:14:22.422 0 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 73811 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73811 ']' 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73811 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73811 00:14:22.422 killing process with pid 73811 00:14:22.422 Received shutdown signal, test time was about 10.000000 seconds 00:14:22.422 00:14:22.422 Latency(us) 00:14:22.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.422 =================================================================================================================== 00:14:22.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73811' 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73811 00:14:22.422 [2024-05-16 18:35:35.881118] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:22.422 18:35:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73811 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73779 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73779 ']' 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73779 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73779 00:14:22.989 killing process with pid 73779 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73779' 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73779 00:14:22.989 [2024-05-16 18:35:36.214607] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of 
trtype' scheduled for removal in v24.09 hit 1 times 00:14:22.989 [2024-05-16 18:35:36.214674] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:22.989 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 73779 00:14:23.247 18:35:36 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73950 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73950 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 73950 ']' 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.248 18:35:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:23.248 [2024-05-16 18:35:36.617380] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:23.248 [2024-05-16 18:35:36.619226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.506 [2024-05-16 18:35:36.764153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.506 [2024-05-16 18:35:36.953540] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.506 [2024-05-16 18:35:36.953814] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.506 [2024-05-16 18:35:36.953863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.506 [2024-05-16 18:35:36.953877] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.506 [2024-05-16 18:35:36.953887] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
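The nvmfappstart step above comes down to launching nvmf_tgt inside the test's network namespace and then waiting until its RPC socket answers. A minimal sketch of that sequence using the commands visible in the trace; the rpc_get_methods polling loop is a hand-rolled stand-in for the autotest waitforlisten helper, not what the helper literally runs:

# start the target in the dedicated namespace with all tracepoint groups enabled (-e 0xFFFF)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# block until the app is listening on /var/tmp/spdk.sock and serving RPCs
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done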
00:14:23.506 [2024-05-16 18:35:36.953933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.765 [2024-05-16 18:35:37.031580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:24.331 18:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:24.331 18:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:24.331 18:35:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.331 18:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:24.331 18:35:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:24.331 18:35:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.331 18:35:37 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Z1hB88Ybzi 00:14:24.331 18:35:37 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Z1hB88Ybzi 00:14:24.331 18:35:37 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:24.589 [2024-05-16 18:35:37.984762] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:24.589 18:35:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:24.847 18:35:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:25.105 [2024-05-16 18:35:38.536890] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:25.105 [2024-05-16 18:35:38.537037] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:25.105 [2024-05-16 18:35:38.537287] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.105 18:35:38 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:25.364 malloc0 00:14:25.364 18:35:38 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:25.622 18:35:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi 00:14:25.880 [2024-05-16 18:35:39.305694] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:25.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
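Everything setup_nvmf_tgt did above is a short series of rpc.py calls against that freshly started target: create the TCP transport, define a subsystem backed by a malloc bdev, open a TLS-enabled listener (the -k flag), and register the allowed host together with its PSK file. Condensed into one place, exactly as the trace shows it (/tmp/tmp.Z1hB88Ybzi is the temporary PSK file the test generated earlier in the run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o                 # TCP transport, flags as used by target/tls.sh
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0           # 32 MB backing bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z1hB88Ybzi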
00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74004 00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74004 /var/tmp/bdevperf.sock 00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 74004 ']' 00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:25.880 18:35:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:25.880 [2024-05-16 18:35:39.382104] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:26.139 [2024-05-16 18:35:39.382414] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74004 ] 00:14:26.139 [2024-05-16 18:35:39.524838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.397 [2024-05-16 18:35:39.652755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.397 [2024-05-16 18:35:39.712156] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:26.963 18:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:26.963 18:35:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:26.963 18:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z1hB88Ybzi 00:14:27.221 18:35:40 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:27.480 [2024-05-16 18:35:40.960783] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:27.738 nvme0n1 00:14:27.738 18:35:41 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:27.738 Running I/O for 1 seconds... 
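The initiator side of the TLS setup is equally small. Before the verify job above was started, only two RPCs went to the bdevperf application: one to register the PSK file as a keyring entry, and one to attach the controller with a reference to that key, which is what produces the nvme0n1 bdev being exercised here:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z1hB88Ybzi
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1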
00:14:28.731 00:14:28.731 Latency(us) 00:14:28.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:28.731 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:28.731 Verification LBA range: start 0x0 length 0x2000 00:14:28.731 nvme0n1 : 1.02 3879.12 15.15 0.00 0.00 32650.41 6881.28 35985.22 00:14:28.731 =================================================================================================================== 00:14:28.731 Total : 3879.12 15.15 0.00 0.00 32650.41 6881.28 35985.22 00:14:28.731 0 00:14:29.004 18:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74004 00:14:29.004 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 74004 ']' 00:14:29.004 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 74004 00:14:29.004 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:29.004 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:29.004 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74004 00:14:29.004 killing process with pid 74004 00:14:29.004 Received shutdown signal, test time was about 1.000000 seconds 00:14:29.004 00:14:29.004 Latency(us) 00:14:29.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.005 =================================================================================================================== 00:14:29.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.005 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:29.005 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:29.005 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74004' 00:14:29.005 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 74004 00:14:29.005 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 74004 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 73950 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 73950 ']' 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 73950 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73950 00:14:29.264 killing process with pid 73950 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73950' 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 73950 00:14:29.264 [2024-05-16 18:35:42.586360] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:29.264 [2024-05-16 18:35:42.586425] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:29.264 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 
73950 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74061 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74061 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 74061 ']' 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:29.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:29.522 18:35:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:29.522 [2024-05-16 18:35:42.991698] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:29.522 [2024-05-16 18:35:42.991809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.780 [2024-05-16 18:35:43.130694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.780 [2024-05-16 18:35:43.281597] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.780 [2024-05-16 18:35:43.281685] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.780 [2024-05-16 18:35:43.281698] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.780 [2024-05-16 18:35:43.281707] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.780 [2024-05-16 18:35:43.281714] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
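The pair of app_setup_trace notices above appears for every nvmf_tgt instance this test starts: with -e 0xFFFF all tracepoint groups record into a shared-memory file. The two ways of inspecting that data are the ones the notices themselves name; the destination path in the copy is arbitrary, and the raw file is what the cleanup at the end of the run archives as nvmf_trace.0:

# live snapshot of the nvmf tracepoints while the target is still running
spdk_trace -s nvmf -i 0
# or keep the raw shared-memory file for offline analysis once the target exits
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0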
00:14:29.780 [2024-05-16 18:35:43.281751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.038 [2024-05-16 18:35:43.356065] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:30.603 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:30.603 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:30.603 18:35:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.603 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.603 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.603 18:35:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.603 18:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:30.603 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.603 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.603 [2024-05-16 18:35:44.070518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.603 malloc0 00:14:30.603 [2024-05-16 18:35:44.104499] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:30.603 [2024-05-16 18:35:44.104616] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:30.603 [2024-05-16 18:35:44.104858] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74093 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74093 /var/tmp/bdevperf.sock 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 74093 ']' 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:30.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:30.862 18:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:30.862 [2024-05-16 18:35:44.190132] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:14:30.862 [2024-05-16 18:35:44.190220] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74093 ] 00:14:30.862 [2024-05-16 18:35:44.332065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.119 [2024-05-16 18:35:44.509844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.119 [2024-05-16 18:35:44.595329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:32.055 18:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:32.055 18:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:32.055 18:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z1hB88Ybzi 00:14:32.055 18:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:32.313 [2024-05-16 18:35:45.680627] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:32.313 nvme0n1 00:14:32.313 18:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:32.572 Running I/O for 1 seconds... 00:14:33.507 00:14:33.507 Latency(us) 00:14:33.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.507 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:33.507 Verification LBA range: start 0x0 length 0x2000 00:14:33.507 nvme0n1 : 1.02 3574.07 13.96 0.00 0.00 35593.81 6076.97 36461.85 00:14:33.507 =================================================================================================================== 00:14:33.507 Total : 3574.07 13.96 0.00 0.00 35593.81 6076.97 36461.85 00:14:33.507 0 00:14:33.507 18:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:33.507 18:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.507 18:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.765 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.765 18:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:33.765 "subsystems": [ 00:14:33.765 { 00:14:33.765 "subsystem": "keyring", 00:14:33.765 "config": [ 00:14:33.765 { 00:14:33.765 "method": "keyring_file_add_key", 00:14:33.765 "params": { 00:14:33.765 "name": "key0", 00:14:33.765 "path": "/tmp/tmp.Z1hB88Ybzi" 00:14:33.765 } 00:14:33.765 } 00:14:33.765 ] 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "subsystem": "iobuf", 00:14:33.765 "config": [ 00:14:33.765 { 00:14:33.765 "method": "iobuf_set_options", 00:14:33.765 "params": { 00:14:33.765 "small_pool_count": 8192, 00:14:33.765 "large_pool_count": 1024, 00:14:33.765 "small_bufsize": 8192, 00:14:33.765 "large_bufsize": 135168 00:14:33.765 } 00:14:33.765 } 00:14:33.765 ] 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "subsystem": "sock", 00:14:33.765 "config": [ 00:14:33.765 { 00:14:33.765 "method": "sock_set_default_impl", 00:14:33.765 "params": { 00:14:33.765 "impl_name": "uring" 00:14:33.765 } 00:14:33.765 
}, 00:14:33.765 { 00:14:33.765 "method": "sock_impl_set_options", 00:14:33.765 "params": { 00:14:33.765 "impl_name": "ssl", 00:14:33.765 "recv_buf_size": 4096, 00:14:33.765 "send_buf_size": 4096, 00:14:33.765 "enable_recv_pipe": true, 00:14:33.765 "enable_quickack": false, 00:14:33.765 "enable_placement_id": 0, 00:14:33.765 "enable_zerocopy_send_server": true, 00:14:33.765 "enable_zerocopy_send_client": false, 00:14:33.765 "zerocopy_threshold": 0, 00:14:33.765 "tls_version": 0, 00:14:33.765 "enable_ktls": false 00:14:33.765 } 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "method": "sock_impl_set_options", 00:14:33.765 "params": { 00:14:33.765 "impl_name": "posix", 00:14:33.765 "recv_buf_size": 2097152, 00:14:33.765 "send_buf_size": 2097152, 00:14:33.765 "enable_recv_pipe": true, 00:14:33.765 "enable_quickack": false, 00:14:33.765 "enable_placement_id": 0, 00:14:33.765 "enable_zerocopy_send_server": true, 00:14:33.765 "enable_zerocopy_send_client": false, 00:14:33.765 "zerocopy_threshold": 0, 00:14:33.765 "tls_version": 0, 00:14:33.765 "enable_ktls": false 00:14:33.765 } 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "method": "sock_impl_set_options", 00:14:33.765 "params": { 00:14:33.765 "impl_name": "uring", 00:14:33.765 "recv_buf_size": 2097152, 00:14:33.765 "send_buf_size": 2097152, 00:14:33.765 "enable_recv_pipe": true, 00:14:33.765 "enable_quickack": false, 00:14:33.765 "enable_placement_id": 0, 00:14:33.765 "enable_zerocopy_send_server": false, 00:14:33.765 "enable_zerocopy_send_client": false, 00:14:33.765 "zerocopy_threshold": 0, 00:14:33.765 "tls_version": 0, 00:14:33.765 "enable_ktls": false 00:14:33.765 } 00:14:33.765 } 00:14:33.765 ] 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "subsystem": "vmd", 00:14:33.765 "config": [] 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "subsystem": "accel", 00:14:33.765 "config": [ 00:14:33.765 { 00:14:33.765 "method": "accel_set_options", 00:14:33.765 "params": { 00:14:33.765 "small_cache_size": 128, 00:14:33.765 "large_cache_size": 16, 00:14:33.765 "task_count": 2048, 00:14:33.765 "sequence_count": 2048, 00:14:33.765 "buf_count": 2048 00:14:33.765 } 00:14:33.765 } 00:14:33.765 ] 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "subsystem": "bdev", 00:14:33.765 "config": [ 00:14:33.765 { 00:14:33.765 "method": "bdev_set_options", 00:14:33.765 "params": { 00:14:33.765 "bdev_io_pool_size": 65535, 00:14:33.765 "bdev_io_cache_size": 256, 00:14:33.765 "bdev_auto_examine": true, 00:14:33.765 "iobuf_small_cache_size": 128, 00:14:33.765 "iobuf_large_cache_size": 16 00:14:33.765 } 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "method": "bdev_raid_set_options", 00:14:33.765 "params": { 00:14:33.765 "process_window_size_kb": 1024 00:14:33.765 } 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "method": "bdev_iscsi_set_options", 00:14:33.765 "params": { 00:14:33.765 "timeout_sec": 30 00:14:33.765 } 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "method": "bdev_nvme_set_options", 00:14:33.765 "params": { 00:14:33.765 "action_on_timeout": "none", 00:14:33.765 "timeout_us": 0, 00:14:33.765 "timeout_admin_us": 0, 00:14:33.765 "keep_alive_timeout_ms": 10000, 00:14:33.765 "arbitration_burst": 0, 00:14:33.765 "low_priority_weight": 0, 00:14:33.765 "medium_priority_weight": 0, 00:14:33.765 "high_priority_weight": 0, 00:14:33.765 "nvme_adminq_poll_period_us": 10000, 00:14:33.765 "nvme_ioq_poll_period_us": 0, 00:14:33.765 "io_queue_requests": 0, 00:14:33.765 "delay_cmd_submit": true, 00:14:33.765 "transport_retry_count": 4, 00:14:33.765 "bdev_retry_count": 3, 00:14:33.765 
"transport_ack_timeout": 0, 00:14:33.765 "ctrlr_loss_timeout_sec": 0, 00:14:33.765 "reconnect_delay_sec": 0, 00:14:33.765 "fast_io_fail_timeout_sec": 0, 00:14:33.765 "disable_auto_failback": false, 00:14:33.765 "generate_uuids": false, 00:14:33.765 "transport_tos": 0, 00:14:33.765 "nvme_error_stat": false, 00:14:33.765 "rdma_srq_size": 0, 00:14:33.765 "io_path_stat": false, 00:14:33.765 "allow_accel_sequence": false, 00:14:33.765 "rdma_max_cq_size": 0, 00:14:33.765 "rdma_cm_event_timeout_ms": 0, 00:14:33.765 "dhchap_digests": [ 00:14:33.765 "sha256", 00:14:33.765 "sha384", 00:14:33.765 "sha512" 00:14:33.765 ], 00:14:33.765 "dhchap_dhgroups": [ 00:14:33.765 "null", 00:14:33.765 "ffdhe2048", 00:14:33.765 "ffdhe3072", 00:14:33.765 "ffdhe4096", 00:14:33.765 "ffdhe6144", 00:14:33.765 "ffdhe8192" 00:14:33.765 ] 00:14:33.765 } 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "method": "bdev_nvme_set_hotplug", 00:14:33.765 "params": { 00:14:33.765 "period_us": 100000, 00:14:33.765 "enable": false 00:14:33.765 } 00:14:33.765 }, 00:14:33.765 { 00:14:33.765 "method": "bdev_malloc_create", 00:14:33.765 "params": { 00:14:33.765 "name": "malloc0", 00:14:33.766 "num_blocks": 8192, 00:14:33.766 "block_size": 4096, 00:14:33.766 "physical_block_size": 4096, 00:14:33.766 "uuid": "f880ca33-dd7b-4528-8eec-a1b6ca1effe1", 00:14:33.766 "optimal_io_boundary": 0 00:14:33.766 } 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "method": "bdev_wait_for_examine" 00:14:33.766 } 00:14:33.766 ] 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "subsystem": "nbd", 00:14:33.766 "config": [] 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "subsystem": "scheduler", 00:14:33.766 "config": [ 00:14:33.766 { 00:14:33.766 "method": "framework_set_scheduler", 00:14:33.766 "params": { 00:14:33.766 "name": "static" 00:14:33.766 } 00:14:33.766 } 00:14:33.766 ] 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "subsystem": "nvmf", 00:14:33.766 "config": [ 00:14:33.766 { 00:14:33.766 "method": "nvmf_set_config", 00:14:33.766 "params": { 00:14:33.766 "discovery_filter": "match_any", 00:14:33.766 "admin_cmd_passthru": { 00:14:33.766 "identify_ctrlr": false 00:14:33.766 } 00:14:33.766 } 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "method": "nvmf_set_max_subsystems", 00:14:33.766 "params": { 00:14:33.766 "max_subsystems": 1024 00:14:33.766 } 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "method": "nvmf_set_crdt", 00:14:33.766 "params": { 00:14:33.766 "crdt1": 0, 00:14:33.766 "crdt2": 0, 00:14:33.766 "crdt3": 0 00:14:33.766 } 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "method": "nvmf_create_transport", 00:14:33.766 "params": { 00:14:33.766 "trtype": "TCP", 00:14:33.766 "max_queue_depth": 128, 00:14:33.766 "max_io_qpairs_per_ctrlr": 127, 00:14:33.766 "in_capsule_data_size": 4096, 00:14:33.766 "max_io_size": 131072, 00:14:33.766 "io_unit_size": 131072, 00:14:33.766 "max_aq_depth": 128, 00:14:33.766 "num_shared_buffers": 511, 00:14:33.766 "buf_cache_size": 4294967295, 00:14:33.766 "dif_insert_or_strip": false, 00:14:33.766 "zcopy": false, 00:14:33.766 "c2h_success": false, 00:14:33.766 "sock_priority": 0, 00:14:33.766 "abort_timeout_sec": 1, 00:14:33.766 "ack_timeout": 0, 00:14:33.766 "data_wr_pool_size": 0 00:14:33.766 } 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "method": "nvmf_create_subsystem", 00:14:33.766 "params": { 00:14:33.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.766 "allow_any_host": false, 00:14:33.766 "serial_number": "00000000000000000000", 00:14:33.766 "model_number": "SPDK bdev Controller", 00:14:33.766 "max_namespaces": 32, 00:14:33.766 
"min_cntlid": 1, 00:14:33.766 "max_cntlid": 65519, 00:14:33.766 "ana_reporting": false 00:14:33.766 } 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "method": "nvmf_subsystem_add_host", 00:14:33.766 "params": { 00:14:33.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.766 "host": "nqn.2016-06.io.spdk:host1", 00:14:33.766 "psk": "key0" 00:14:33.766 } 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "method": "nvmf_subsystem_add_ns", 00:14:33.766 "params": { 00:14:33.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.766 "namespace": { 00:14:33.766 "nsid": 1, 00:14:33.766 "bdev_name": "malloc0", 00:14:33.766 "nguid": "F880CA33DD7B45288EECA1B6CA1EFFE1", 00:14:33.766 "uuid": "f880ca33-dd7b-4528-8eec-a1b6ca1effe1", 00:14:33.766 "no_auto_visible": false 00:14:33.766 } 00:14:33.766 } 00:14:33.766 }, 00:14:33.766 { 00:14:33.766 "method": "nvmf_subsystem_add_listener", 00:14:33.766 "params": { 00:14:33.766 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.766 "listen_address": { 00:14:33.766 "trtype": "TCP", 00:14:33.766 "adrfam": "IPv4", 00:14:33.766 "traddr": "10.0.0.2", 00:14:33.766 "trsvcid": "4420" 00:14:33.766 }, 00:14:33.766 "secure_channel": true 00:14:33.766 } 00:14:33.766 } 00:14:33.766 ] 00:14:33.766 } 00:14:33.766 ] 00:14:33.766 }' 00:14:33.766 18:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:34.037 18:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:34.037 "subsystems": [ 00:14:34.037 { 00:14:34.037 "subsystem": "keyring", 00:14:34.037 "config": [ 00:14:34.037 { 00:14:34.037 "method": "keyring_file_add_key", 00:14:34.037 "params": { 00:14:34.037 "name": "key0", 00:14:34.037 "path": "/tmp/tmp.Z1hB88Ybzi" 00:14:34.037 } 00:14:34.037 } 00:14:34.037 ] 00:14:34.037 }, 00:14:34.037 { 00:14:34.037 "subsystem": "iobuf", 00:14:34.037 "config": [ 00:14:34.037 { 00:14:34.037 "method": "iobuf_set_options", 00:14:34.037 "params": { 00:14:34.037 "small_pool_count": 8192, 00:14:34.037 "large_pool_count": 1024, 00:14:34.037 "small_bufsize": 8192, 00:14:34.037 "large_bufsize": 135168 00:14:34.037 } 00:14:34.037 } 00:14:34.037 ] 00:14:34.037 }, 00:14:34.037 { 00:14:34.037 "subsystem": "sock", 00:14:34.037 "config": [ 00:14:34.037 { 00:14:34.037 "method": "sock_set_default_impl", 00:14:34.037 "params": { 00:14:34.037 "impl_name": "uring" 00:14:34.037 } 00:14:34.037 }, 00:14:34.037 { 00:14:34.037 "method": "sock_impl_set_options", 00:14:34.037 "params": { 00:14:34.037 "impl_name": "ssl", 00:14:34.037 "recv_buf_size": 4096, 00:14:34.037 "send_buf_size": 4096, 00:14:34.037 "enable_recv_pipe": true, 00:14:34.037 "enable_quickack": false, 00:14:34.037 "enable_placement_id": 0, 00:14:34.037 "enable_zerocopy_send_server": true, 00:14:34.037 "enable_zerocopy_send_client": false, 00:14:34.037 "zerocopy_threshold": 0, 00:14:34.037 "tls_version": 0, 00:14:34.037 "enable_ktls": false 00:14:34.037 } 00:14:34.037 }, 00:14:34.037 { 00:14:34.037 "method": "sock_impl_set_options", 00:14:34.037 "params": { 00:14:34.037 "impl_name": "posix", 00:14:34.037 "recv_buf_size": 2097152, 00:14:34.037 "send_buf_size": 2097152, 00:14:34.037 "enable_recv_pipe": true, 00:14:34.037 "enable_quickack": false, 00:14:34.037 "enable_placement_id": 0, 00:14:34.037 "enable_zerocopy_send_server": true, 00:14:34.037 "enable_zerocopy_send_client": false, 00:14:34.037 "zerocopy_threshold": 0, 00:14:34.037 "tls_version": 0, 00:14:34.037 "enable_ktls": false 00:14:34.038 } 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "method": "sock_impl_set_options", 
00:14:34.038 "params": { 00:14:34.038 "impl_name": "uring", 00:14:34.038 "recv_buf_size": 2097152, 00:14:34.038 "send_buf_size": 2097152, 00:14:34.038 "enable_recv_pipe": true, 00:14:34.038 "enable_quickack": false, 00:14:34.038 "enable_placement_id": 0, 00:14:34.038 "enable_zerocopy_send_server": false, 00:14:34.038 "enable_zerocopy_send_client": false, 00:14:34.038 "zerocopy_threshold": 0, 00:14:34.038 "tls_version": 0, 00:14:34.038 "enable_ktls": false 00:14:34.038 } 00:14:34.038 } 00:14:34.038 ] 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "subsystem": "vmd", 00:14:34.038 "config": [] 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "subsystem": "accel", 00:14:34.038 "config": [ 00:14:34.038 { 00:14:34.038 "method": "accel_set_options", 00:14:34.038 "params": { 00:14:34.038 "small_cache_size": 128, 00:14:34.038 "large_cache_size": 16, 00:14:34.038 "task_count": 2048, 00:14:34.038 "sequence_count": 2048, 00:14:34.038 "buf_count": 2048 00:14:34.038 } 00:14:34.038 } 00:14:34.038 ] 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "subsystem": "bdev", 00:14:34.038 "config": [ 00:14:34.038 { 00:14:34.038 "method": "bdev_set_options", 00:14:34.038 "params": { 00:14:34.038 "bdev_io_pool_size": 65535, 00:14:34.038 "bdev_io_cache_size": 256, 00:14:34.038 "bdev_auto_examine": true, 00:14:34.038 "iobuf_small_cache_size": 128, 00:14:34.038 "iobuf_large_cache_size": 16 00:14:34.038 } 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "method": "bdev_raid_set_options", 00:14:34.038 "params": { 00:14:34.038 "process_window_size_kb": 1024 00:14:34.038 } 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "method": "bdev_iscsi_set_options", 00:14:34.038 "params": { 00:14:34.038 "timeout_sec": 30 00:14:34.038 } 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "method": "bdev_nvme_set_options", 00:14:34.038 "params": { 00:14:34.038 "action_on_timeout": "none", 00:14:34.038 "timeout_us": 0, 00:14:34.038 "timeout_admin_us": 0, 00:14:34.038 "keep_alive_timeout_ms": 10000, 00:14:34.038 "arbitration_burst": 0, 00:14:34.038 "low_priority_weight": 0, 00:14:34.038 "medium_priority_weight": 0, 00:14:34.038 "high_priority_weight": 0, 00:14:34.038 "nvme_adminq_poll_period_us": 10000, 00:14:34.038 "nvme_ioq_poll_period_us": 0, 00:14:34.038 "io_queue_requests": 512, 00:14:34.038 "delay_cmd_submit": true, 00:14:34.038 "transport_retry_count": 4, 00:14:34.038 "bdev_retry_count": 3, 00:14:34.038 "transport_ack_timeout": 0, 00:14:34.038 "ctrlr_loss_timeout_sec": 0, 00:14:34.038 "reconnect_delay_sec": 0, 00:14:34.038 "fast_io_fail_timeout_sec": 0, 00:14:34.038 "disable_auto_failback": false, 00:14:34.038 "generate_uuids": false, 00:14:34.038 "transport_tos": 0, 00:14:34.038 "nvme_error_stat": false, 00:14:34.038 "rdma_srq_size": 0, 00:14:34.038 "io_path_stat": false, 00:14:34.038 "allow_accel_sequence": false, 00:14:34.038 "rdma_max_cq_size": 0, 00:14:34.038 "rdma_cm_event_timeout_ms": 0, 00:14:34.038 "dhchap_digests": [ 00:14:34.038 "sha256", 00:14:34.038 "sha384", 00:14:34.038 "sha512" 00:14:34.038 ], 00:14:34.038 "dhchap_dhgroups": [ 00:14:34.038 "null", 00:14:34.038 "ffdhe2048", 00:14:34.038 "ffdhe3072", 00:14:34.038 "ffdhe4096", 00:14:34.038 "ffdhe6144", 00:14:34.038 "ffdhe8192" 00:14:34.038 ] 00:14:34.038 } 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "method": "bdev_nvme_attach_controller", 00:14:34.038 "params": { 00:14:34.038 "name": "nvme0", 00:14:34.038 "trtype": "TCP", 00:14:34.038 "adrfam": "IPv4", 00:14:34.038 "traddr": "10.0.0.2", 00:14:34.038 "trsvcid": "4420", 00:14:34.038 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.038 
"prchk_reftag": false, 00:14:34.038 "prchk_guard": false, 00:14:34.038 "ctrlr_loss_timeout_sec": 0, 00:14:34.038 "reconnect_delay_sec": 0, 00:14:34.038 "fast_io_fail_timeout_sec": 0, 00:14:34.038 "psk": "key0", 00:14:34.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:34.038 "hdgst": false, 00:14:34.038 "ddgst": false 00:14:34.038 } 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "method": "bdev_nvme_set_hotplug", 00:14:34.038 "params": { 00:14:34.038 "period_us": 100000, 00:14:34.038 "enable": false 00:14:34.038 } 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "method": "bdev_enable_histogram", 00:14:34.038 "params": { 00:14:34.038 "name": "nvme0n1", 00:14:34.038 "enable": true 00:14:34.038 } 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "method": "bdev_wait_for_examine" 00:14:34.038 } 00:14:34.038 ] 00:14:34.038 }, 00:14:34.038 { 00:14:34.038 "subsystem": "nbd", 00:14:34.038 "config": [] 00:14:34.038 } 00:14:34.038 ] 00:14:34.038 }' 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74093 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 74093 ']' 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 74093 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74093 00:14:34.038 killing process with pid 74093 00:14:34.038 Received shutdown signal, test time was about 1.000000 seconds 00:14:34.038 00:14:34.038 Latency(us) 00:14:34.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.038 =================================================================================================================== 00:14:34.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74093' 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 74093 00:14:34.038 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 74093 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74061 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 74061 ']' 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 74061 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74061 00:14:34.296 killing process with pid 74061 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74061' 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 74061 00:14:34.296 [2024-05-16 18:35:47.671651] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:34.296 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 74061 00:14:34.555 18:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:34.555 18:35:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:34.555 18:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:34.555 "subsystems": [ 00:14:34.555 { 00:14:34.555 "subsystem": "keyring", 00:14:34.555 "config": [ 00:14:34.555 { 00:14:34.555 "method": "keyring_file_add_key", 00:14:34.555 "params": { 00:14:34.555 "name": "key0", 00:14:34.555 "path": "/tmp/tmp.Z1hB88Ybzi" 00:14:34.555 } 00:14:34.555 } 00:14:34.555 ] 00:14:34.555 }, 00:14:34.555 { 00:14:34.555 "subsystem": "iobuf", 00:14:34.555 "config": [ 00:14:34.555 { 00:14:34.555 "method": "iobuf_set_options", 00:14:34.555 "params": { 00:14:34.555 "small_pool_count": 8192, 00:14:34.555 "large_pool_count": 1024, 00:14:34.555 "small_bufsize": 8192, 00:14:34.555 "large_bufsize": 135168 00:14:34.555 } 00:14:34.555 } 00:14:34.555 ] 00:14:34.555 }, 00:14:34.555 { 00:14:34.555 "subsystem": "sock", 00:14:34.555 "config": [ 00:14:34.555 { 00:14:34.555 "method": "sock_set_default_impl", 00:14:34.555 "params": { 00:14:34.555 "impl_name": "uring" 00:14:34.555 } 00:14:34.555 }, 00:14:34.555 { 00:14:34.555 "method": "sock_impl_set_options", 00:14:34.555 "params": { 00:14:34.555 "impl_name": "ssl", 00:14:34.555 "recv_buf_size": 4096, 00:14:34.555 "send_buf_size": 4096, 00:14:34.555 "enable_recv_pipe": true, 00:14:34.555 "enable_quickack": false, 00:14:34.555 "enable_placement_id": 0, 00:14:34.555 "enable_zerocopy_send_server": true, 00:14:34.555 "enable_zerocopy_send_client": false, 00:14:34.555 "zerocopy_threshold": 0, 00:14:34.555 "tls_version": 0, 00:14:34.555 "enable_ktls": false 00:14:34.555 } 00:14:34.555 }, 00:14:34.555 { 00:14:34.555 "method": "sock_impl_set_options", 00:14:34.555 "params": { 00:14:34.555 "impl_name": "posix", 00:14:34.555 "recv_buf_size": 2097152, 00:14:34.555 "send_buf_size": 2097152, 00:14:34.555 "enable_recv_pipe": true, 00:14:34.555 "enable_quickack": false, 00:14:34.555 "enable_placement_id": 0, 00:14:34.555 "enable_zerocopy_send_server": true, 00:14:34.555 "enable_zerocopy_send_client": false, 00:14:34.555 "zerocopy_threshold": 0, 00:14:34.555 "tls_version": 0, 00:14:34.555 "enable_ktls": false 00:14:34.555 } 00:14:34.555 }, 00:14:34.555 { 00:14:34.555 "method": "sock_impl_set_options", 00:14:34.555 "params": { 00:14:34.555 "impl_name": "uring", 00:14:34.555 "recv_buf_size": 2097152, 00:14:34.555 "send_buf_size": 2097152, 00:14:34.555 "enable_recv_pipe": true, 00:14:34.555 "enable_quickack": false, 00:14:34.555 "enable_placement_id": 0, 00:14:34.555 "enable_zerocopy_send_server": false, 00:14:34.555 "enable_zerocopy_send_client": false, 00:14:34.555 "zerocopy_threshold": 0, 00:14:34.555 "tls_version": 0, 00:14:34.555 "enable_ktls": false 00:14:34.555 } 00:14:34.555 } 00:14:34.555 ] 00:14:34.555 }, 00:14:34.555 { 00:14:34.555 "subsystem": "vmd", 00:14:34.555 "config": [] 00:14:34.555 }, 00:14:34.555 { 00:14:34.555 "subsystem": "accel", 00:14:34.555 "config": [ 00:14:34.555 { 00:14:34.555 "method": "accel_set_options", 00:14:34.555 "params": { 00:14:34.555 "small_cache_size": 128, 00:14:34.555 "large_cache_size": 16, 00:14:34.555 "task_count": 2048, 00:14:34.555 "sequence_count": 2048, 00:14:34.555 "buf_count": 2048 00:14:34.555 } 00:14:34.555 
} 00:14:34.555 ] 00:14:34.555 }, 00:14:34.555 { 00:14:34.555 "subsystem": "bdev", 00:14:34.555 "config": [ 00:14:34.555 { 00:14:34.555 "method": "bdev_set_options", 00:14:34.555 "params": { 00:14:34.555 "bdev_io_pool_size": 65535, 00:14:34.555 "bdev_io_cache_size": 256, 00:14:34.555 "bdev_auto_examine": true, 00:14:34.556 "iobuf_small_cache_size": 128, 00:14:34.556 "iobuf_large_cache_size": 16 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "bdev_raid_set_options", 00:14:34.556 "params": { 00:14:34.556 "process_window_size_kb": 1024 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "bdev_iscsi_set_options", 00:14:34.556 "params": { 00:14:34.556 "timeout_sec": 30 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "bdev_nvme_set_options", 00:14:34.556 "params": { 00:14:34.556 "action_on_timeout": "none", 00:14:34.556 "timeout_us": 0, 00:14:34.556 "timeout_admin_us": 0, 00:14:34.556 "keep_alive_timeout_ms": 10000, 00:14:34.556 "arbitration_burst": 0, 00:14:34.556 "low_priority_weight": 0, 00:14:34.556 "medium_priority_weight": 0, 00:14:34.556 "high_priority_weight": 0, 00:14:34.556 "nvme_adminq_poll_period_us": 10000, 00:14:34.556 "nvme_ioq_poll_period_us": 0, 00:14:34.556 "io_queue_requests": 0, 00:14:34.556 "delay_cmd_submit": true, 00:14:34.556 "transport_retry_count": 4, 00:14:34.556 "bdev_retry_count": 3, 00:14:34.556 "transport_ack_timeout": 0, 00:14:34.556 "ctrlr_loss_timeout_sec": 0, 00:14:34.556 "reconnect_delay_sec": 0, 00:14:34.556 "fast_io_fail_timeout_sec": 0, 00:14:34.556 "disable_auto_failback": false, 00:14:34.556 "generate_uuids": false, 00:14:34.556 "transport_tos": 0, 00:14:34.556 "nvme_error_stat": false, 00:14:34.556 "rdma_srq_size": 0, 00:14:34.556 "io_path_stat": false, 00:14:34.556 "allow_accel_sequence": false, 00:14:34.556 "rdma_max_cq_size": 0, 00:14:34.556 "rdma_cm_event_timeout_ms": 0, 00:14:34.556 "dhchap_digests": [ 00:14:34.556 "sha256", 00:14:34.556 "sha384", 00:14:34.556 "sha512" 00:14:34.556 ], 00:14:34.556 "dhchap_dhgroups": [ 00:14:34.556 "null", 00:14:34.556 "ffdhe2048", 00:14:34.556 "ffdhe3072", 00:14:34.556 "ffdhe4096", 00:14:34.556 "ffdhe6144", 00:14:34.556 "ffdhe8192" 00:14:34.556 ] 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "bdev_nvme_set_hotplug", 00:14:34.556 "params": { 00:14:34.556 "period_us": 100000, 00:14:34.556 "enable": false 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "bdev_malloc_create", 00:14:34.556 "params": { 00:14:34.556 "name": "malloc0", 00:14:34.556 "num_blocks": 8192, 00:14:34.556 "block_size": 4096, 00:14:34.556 "physical_block_size": 4096, 00:14:34.556 "uuid": "f880ca33-dd7b-4528-8eec-a1b6ca1effe1", 00:14:34.556 "optimal_io_boundary": 0 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "bdev_wait_for_examine" 00:14:34.556 } 00:14:34.556 ] 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "subsystem": "nbd", 00:14:34.556 "config": [] 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "subsystem": "scheduler", 00:14:34.556 "config": [ 00:14:34.556 { 00:14:34.556 "method": "framework_set_scheduler", 00:14:34.556 "params": { 00:14:34.556 "name": "static" 00:14:34.556 } 00:14:34.556 } 00:14:34.556 ] 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "subsystem": "nvmf", 00:14:34.556 "config": [ 00:14:34.556 { 00:14:34.556 "method": "nvmf_set_config", 00:14:34.556 "params": { 00:14:34.556 "discovery_filter": "match_any", 00:14:34.556 "admin_cmd_passthru": { 00:14:34.556 "identify_ctrlr": false 00:14:34.556 } 00:14:34.556 } 
00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "nvmf_set_max_subsystems", 00:14:34.556 "params": { 00:14:34.556 "max_subsystems": 1024 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "nvmf_set_crdt", 00:14:34.556 "params": { 00:14:34.556 "crdt1": 0, 00:14:34.556 "crdt2": 0, 00:14:34.556 "crdt3": 0 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "nvmf_create_transport", 00:14:34.556 "params": { 00:14:34.556 "trtype": "TCP", 00:14:34.556 "max_queue_depth": 128, 00:14:34.556 "max_io_qpairs_per_ctrlr": 127, 00:14:34.556 "in_capsule_data_size": 4096, 00:14:34.556 "max_io_size": 131072, 00:14:34.556 "io_unit_size": 131072, 00:14:34.556 "max_aq_depth": 128, 00:14:34.556 "num_shared_buffers": 511, 00:14:34.556 "buf_cache_size": 4294967295, 00:14:34.556 "dif_insert_or_strip": false, 00:14:34.556 "zcopy": false, 00:14:34.556 "c2h_success": false, 00:14:34.556 "sock_priority": 0, 00:14:34.556 "abort_timeout_sec": 1, 00:14:34.556 "ack_timeout": 0, 00:14:34.556 "data_wr_pool_size": 0 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "nvmf_create_subsystem", 00:14:34.556 "params": { 00:14:34.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.556 "allow_any_host": false, 00:14:34.556 "serial_number": "00000000000000000000", 00:14:34.556 "model_number": "SPDK bdev Controller", 00:14:34.556 "max_namespaces": 32, 00:14:34.556 "min_cntlid": 1, 00:14:34.556 "max_cntlid": 65519, 00:14:34.556 "ana_reporting": false 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "nvmf_subsystem_add_host", 00:14:34.556 "params": { 00:14:34.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.556 "host": "nqn.2016-06.io.spdk:host1", 00:14:34.556 "psk": "key0" 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "nvmf_subsystem_add_ns", 00:14:34.556 "params": { 00:14:34.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.556 "namespace": { 00:14:34.556 "nsid": 1, 00:14:34.556 "bdev_name": "malloc0", 00:14:34.556 "nguid": "F880CA33DD7B45288EECA1B6CA1EFFE1", 00:14:34.556 "uuid": "f880ca33-dd7b-4528-8eec-a1b6ca1effe1", 00:14:34.556 "no_auto_visible": false 00:14:34.556 } 00:14:34.556 } 00:14:34.556 }, 00:14:34.556 { 00:14:34.556 "method": "nvmf_subsystem_add_listener", 00:14:34.556 "params": { 00:14:34.556 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:34.556 "listen_address": { 00:14:34.556 "trtype": "TCP", 00:14:34.556 "adrfam": "IPv4", 00:14:34.556 "traddr": "10.0.0.2", 00:14:34.556 "trsvcid": "4420" 00:14:34.556 }, 00:14:34.556 "secure_channel": true 00:14:34.556 } 00:14:34.556 } 00:14:34.556 ] 00:14:34.556 } 00:14:34.556 ] 00:14:34.556 }' 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74154 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74154 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 74154 ']' 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:34.556 18:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.556 [2024-05-16 18:35:47.973182] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:34.556 [2024-05-16 18:35:47.973260] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.815 [2024-05-16 18:35:48.109526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.815 [2024-05-16 18:35:48.218574] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.815 [2024-05-16 18:35:48.218675] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.815 [2024-05-16 18:35:48.218687] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.815 [2024-05-16 18:35:48.218695] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.815 [2024-05-16 18:35:48.218703] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.815 [2024-05-16 18:35:48.218789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.074 [2024-05-16 18:35:48.394178] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:35.074 [2024-05-16 18:35:48.473658] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.074 [2024-05-16 18:35:48.505570] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:35.074 [2024-05-16 18:35:48.505652] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:35.074 [2024-05-16 18:35:48.505896] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
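This third target (pid 74154) and the bdevperf instance that follows are not configured RPC by RPC: target/tls.sh captured both runtime configurations earlier with save_config (the tgtcfg and bperfcfg JSON blobs above) and now feeds them back at startup. The -c /dev/fd/62 and -c /dev/fd/63 arguments are read ends of bash process substitutions carrying that JSON; that mechanism is inferred from the trace, and an equivalent, more explicit sequence with plain files (names illustrative) would be:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# capture the live configuration of the running apps ...
$rpc save_config > tgt.json
$rpc -s /var/tmp/bdevperf.sock save_config > bperf.json
# ... and replay it on the next start, so no further setup RPCs are needed
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tgt.json &
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c bperf.json &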
00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74186 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74186 /var/tmp/bdevperf.sock 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 74186 ']' 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:35.641 18:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:35.641 "subsystems": [ 00:14:35.641 { 00:14:35.641 "subsystem": "keyring", 00:14:35.641 "config": [ 00:14:35.641 { 00:14:35.641 "method": "keyring_file_add_key", 00:14:35.641 "params": { 00:14:35.641 "name": "key0", 00:14:35.641 "path": "/tmp/tmp.Z1hB88Ybzi" 00:14:35.641 } 00:14:35.641 } 00:14:35.641 ] 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "subsystem": "iobuf", 00:14:35.641 "config": [ 00:14:35.641 { 00:14:35.641 "method": "iobuf_set_options", 00:14:35.641 "params": { 00:14:35.641 "small_pool_count": 8192, 00:14:35.641 "large_pool_count": 1024, 00:14:35.641 "small_bufsize": 8192, 00:14:35.641 "large_bufsize": 135168 00:14:35.641 } 00:14:35.641 } 00:14:35.641 ] 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "subsystem": "sock", 00:14:35.641 "config": [ 00:14:35.641 { 00:14:35.641 "method": "sock_set_default_impl", 00:14:35.641 "params": { 00:14:35.641 "impl_name": "uring" 00:14:35.641 } 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "method": "sock_impl_set_options", 00:14:35.641 "params": { 00:14:35.641 "impl_name": "ssl", 00:14:35.641 "recv_buf_size": 4096, 00:14:35.641 "send_buf_size": 4096, 00:14:35.641 "enable_recv_pipe": true, 00:14:35.641 "enable_quickack": false, 00:14:35.641 "enable_placement_id": 0, 00:14:35.641 "enable_zerocopy_send_server": true, 00:14:35.641 "enable_zerocopy_send_client": false, 00:14:35.641 "zerocopy_threshold": 0, 00:14:35.641 "tls_version": 0, 00:14:35.641 "enable_ktls": false 00:14:35.641 } 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "method": "sock_impl_set_options", 00:14:35.641 "params": { 00:14:35.641 "impl_name": "posix", 00:14:35.641 "recv_buf_size": 2097152, 00:14:35.641 "send_buf_size": 2097152, 00:14:35.641 "enable_recv_pipe": true, 00:14:35.641 "enable_quickack": false, 00:14:35.641 "enable_placement_id": 0, 00:14:35.641 "enable_zerocopy_send_server": true, 00:14:35.641 "enable_zerocopy_send_client": false, 00:14:35.641 "zerocopy_threshold": 0, 00:14:35.641 "tls_version": 0, 00:14:35.641 "enable_ktls": false 00:14:35.641 } 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "method": "sock_impl_set_options", 00:14:35.641 "params": { 00:14:35.641 "impl_name": "uring", 00:14:35.641 "recv_buf_size": 2097152, 00:14:35.641 "send_buf_size": 2097152, 00:14:35.641 "enable_recv_pipe": true, 00:14:35.641 
"enable_quickack": false, 00:14:35.641 "enable_placement_id": 0, 00:14:35.641 "enable_zerocopy_send_server": false, 00:14:35.641 "enable_zerocopy_send_client": false, 00:14:35.641 "zerocopy_threshold": 0, 00:14:35.641 "tls_version": 0, 00:14:35.641 "enable_ktls": false 00:14:35.641 } 00:14:35.641 } 00:14:35.641 ] 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "subsystem": "vmd", 00:14:35.641 "config": [] 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "subsystem": "accel", 00:14:35.641 "config": [ 00:14:35.641 { 00:14:35.641 "method": "accel_set_options", 00:14:35.641 "params": { 00:14:35.641 "small_cache_size": 128, 00:14:35.641 "large_cache_size": 16, 00:14:35.641 "task_count": 2048, 00:14:35.641 "sequence_count": 2048, 00:14:35.641 "buf_count": 2048 00:14:35.641 } 00:14:35.641 } 00:14:35.641 ] 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "subsystem": "bdev", 00:14:35.641 "config": [ 00:14:35.641 { 00:14:35.641 "method": "bdev_set_options", 00:14:35.641 "params": { 00:14:35.641 "bdev_io_pool_size": 65535, 00:14:35.641 "bdev_io_cache_size": 256, 00:14:35.641 "bdev_auto_examine": true, 00:14:35.641 "iobuf_small_cache_size": 128, 00:14:35.641 "iobuf_large_cache_size": 16 00:14:35.641 } 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "method": "bdev_raid_set_options", 00:14:35.641 "params": { 00:14:35.641 "process_window_size_kb": 1024 00:14:35.641 } 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "method": "bdev_iscsi_set_options", 00:14:35.641 "params": { 00:14:35.641 "timeout_sec": 30 00:14:35.641 } 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "method": "bdev_nvme_set_options", 00:14:35.641 "params": { 00:14:35.641 "action_on_timeout": "none", 00:14:35.641 "timeout_us": 0, 00:14:35.641 "timeout_admin_us": 0, 00:14:35.641 "keep_alive_timeout_ms": 10000, 00:14:35.641 "arbitration_burst": 0, 00:14:35.641 "low_priority_weight": 0, 00:14:35.641 "medium_priority_weight": 0, 00:14:35.641 "high_priority_weight": 0, 00:14:35.641 "nvme_adminq_poll_period_us": 10000, 00:14:35.641 "nvme_ioq_poll_period_us": 0, 00:14:35.641 "io_queue_requests": 512, 00:14:35.641 "delay_cmd_submit": true, 00:14:35.641 "transport_retry_count": 4, 00:14:35.641 "bdev_retry_count": 3, 00:14:35.641 "transport_ack_timeout": 0, 00:14:35.641 "ctrlr_loss_timeout_sec": 0, 00:14:35.641 "reconnect_delay_sec": 0, 00:14:35.641 "fast_io_fail_timeout_sec": 0, 00:14:35.641 "disable_auto_failback": false, 00:14:35.641 "generate_uuids": false, 00:14:35.641 "transport_tos": 0, 00:14:35.641 "nvme_error_stat": false, 00:14:35.641 "rdma_srq_size": 0, 00:14:35.641 "io_path_stat": false, 00:14:35.641 "allow_accel_sequence": false, 00:14:35.641 "rdma_max_cq_size": 0, 00:14:35.641 "rdma_cm_event_timeout_ms": 0, 00:14:35.641 "dhchap_digests": [ 00:14:35.641 "sha256", 00:14:35.641 "sha384", 00:14:35.641 "sha512" 00:14:35.641 ], 00:14:35.641 "dhchap_dhgroups": [ 00:14:35.641 "null", 00:14:35.641 "ffdhe2048", 00:14:35.641 "ffdhe3072", 00:14:35.641 "ffdhe4096", 00:14:35.641 "ffdhe6144", 00:14:35.641 "ffdhe8192" 00:14:35.641 ] 00:14:35.641 } 00:14:35.641 }, 00:14:35.641 { 00:14:35.641 "method": "bdev_nvme_attach_controller", 00:14:35.641 "params": { 00:14:35.641 "name": "nvme0", 00:14:35.641 "trtype": "TCP", 00:14:35.641 "adrfam": "IPv4", 00:14:35.641 "traddr": "10.0.0.2", 00:14:35.641 "trsvcid": "4420", 00:14:35.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:35.641 "prchk_reftag": false, 00:14:35.641 "prchk_guard": false, 00:14:35.641 "ctrlr_loss_timeout_sec": 0, 00:14:35.642 "reconnect_delay_sec": 0, 00:14:35.642 "fast_io_fail_timeout_sec": 0, 00:14:35.642 "psk": 
"key0", 00:14:35.642 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:35.642 "hdgst": false, 00:14:35.642 "ddgst": false 00:14:35.642 } 00:14:35.642 }, 00:14:35.642 { 00:14:35.642 "method": "bdev_nvme_set_hotplug", 00:14:35.642 "params": { 00:14:35.642 "period_us": 100000, 00:14:35.642 "enable": false 00:14:35.642 } 00:14:35.642 }, 00:14:35.642 { 00:14:35.642 "method": "bdev_enable_histogram", 00:14:35.642 "params": { 00:14:35.642 "name": "nvme0n1", 00:14:35.642 "enable": true 00:14:35.642 } 00:14:35.642 }, 00:14:35.642 { 00:14:35.642 "method": "bdev_wait_for_examine" 00:14:35.642 } 00:14:35.642 ] 00:14:35.642 }, 00:14:35.642 { 00:14:35.642 "subsystem": "nbd", 00:14:35.642 "config": [] 00:14:35.642 } 00:14:35.642 ] 00:14:35.642 }' 00:14:35.642 [2024-05-16 18:35:49.032198] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:35.642 [2024-05-16 18:35:49.032601] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74186 ] 00:14:35.899 [2024-05-16 18:35:49.170917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.899 [2024-05-16 18:35:49.272322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.158 [2024-05-16 18:35:49.411531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:36.158 [2024-05-16 18:35:49.461056] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:36.732 18:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:36.732 18:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:36.732 18:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:36.732 18:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:36.990 18:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.990 18:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:36.990 Running I/O for 1 seconds... 
00:14:37.924 00:14:37.924 Latency(us) 00:14:37.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.924 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:37.924 Verification LBA range: start 0x0 length 0x2000 00:14:37.924 nvme0n1 : 1.02 3471.03 13.56 0.00 0.00 36364.25 7000.44 44326.17 00:14:37.924 =================================================================================================================== 00:14:37.924 Total : 3471.03 13.56 0.00 0.00 36364.25 7000.44 44326.17 00:14:37.924 0 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:37.924 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:38.182 nvmf_trace.0 00:14:38.182 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:14:38.182 18:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74186 00:14:38.182 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 74186 ']' 00:14:38.182 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 74186 00:14:38.182 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:38.182 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:38.182 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74186 00:14:38.182 killing process with pid 74186 00:14:38.183 Received shutdown signal, test time was about 1.000000 seconds 00:14:38.183 00:14:38.183 Latency(us) 00:14:38.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.183 =================================================================================================================== 00:14:38.183 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:38.183 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:38.183 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:38.183 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74186' 00:14:38.183 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 74186 00:14:38.183 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 74186 00:14:38.441 18:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:38.441 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:38.441 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:38.441 18:35:51 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:38.441 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:38.441 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:38.441 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:38.441 rmmod nvme_tcp 00:14:38.441 rmmod nvme_fabrics 00:14:38.441 rmmod nvme_keyring 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74154 ']' 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74154 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 74154 ']' 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 74154 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74154 00:14:38.699 killing process with pid 74154 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74154' 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 74154 00:14:38.699 [2024-05-16 18:35:51.978565] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:38.699 18:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 74154 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.aiTeTGQLfS /tmp/tmp.itycBfiSzK /tmp/tmp.Z1hB88Ybzi 00:14:38.958 00:14:38.958 real 1m30.228s 00:14:38.958 user 2m23.061s 00:14:38.958 sys 0m29.433s 00:14:38.958 ************************************ 00:14:38.958 END TEST nvmf_tls 00:14:38.958 ************************************ 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:38.958 18:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.958 18:35:52 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:38.958 18:35:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:38.958 18:35:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:38.958 18:35:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.958 ************************************ 00:14:38.958 START TEST nvmf_fips 00:14:38.958 ************************************ 00:14:38.958 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:39.218 * Looking for test storage... 00:14:39.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:39.218 18:35:52 
nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:39.218 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:39.219 Error setting digest 00:14:39.219 00F2B0C4EA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:39.219 00F2B0C4EA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:39.219 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:39.478 Cannot find device "nvmf_tgt_br" 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:39.478 Cannot find device "nvmf_tgt_br2" 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:39.478 Cannot find device "nvmf_tgt_br" 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:39.478 Cannot find device "nvmf_tgt_br2" 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:39.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:39.478 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:39.478 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:39.737 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:39.737 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:39.737 18:35:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:39.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:39.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:14:39.737 00:14:39.737 --- 10.0.0.2 ping statistics --- 00:14:39.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.737 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:39.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:39.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:14:39.737 00:14:39.737 --- 10.0.0.3 ping statistics --- 00:14:39.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.737 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:39.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:39.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:14:39.737 00:14:39.737 --- 10.0.0.1 ping statistics --- 00:14:39.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:39.737 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74450 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74450 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 74450 ']' 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:39.737 18:35:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:39.737 [2024-05-16 18:35:53.190411] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:14:39.737 [2024-05-16 18:35:53.190520] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:39.996 [2024-05-16 18:35:53.329171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.996 [2024-05-16 18:35:53.481806] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:39.996 [2024-05-16 18:35:53.481906] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:39.996 [2024-05-16 18:35:53.481918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:39.996 [2024-05-16 18:35:53.481933] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:39.996 [2024-05-16 18:35:53.481941] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:39.996 [2024-05-16 18:35:53.481969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.254 [2024-05-16 18:35:53.556897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:40.819 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:40.819 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:40.820 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:41.077 [2024-05-16 18:35:54.462236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.077 [2024-05-16 18:35:54.478158] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:41.077 [2024-05-16 18:35:54.478291] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.078 [2024-05-16 18:35:54.478529] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.078 [2024-05-16 18:35:54.513660] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: 
nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:41.078 malloc0 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74489 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74489 /var/tmp/bdevperf.sock 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 74489 ']' 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:41.078 18:35:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:41.336 [2024-05-16 18:35:54.646710] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:41.336 [2024-05-16 18:35:54.646867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74489 ] 00:14:41.336 [2024-05-16 18:35:54.789107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.595 [2024-05-16 18:35:54.954377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.595 [2024-05-16 18:35:55.027934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:42.163 18:35:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.163 18:35:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:14:42.163 18:35:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:42.730 [2024-05-16 18:35:55.962602] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.730 [2024-05-16 18:35:55.962741] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:42.730 TLSTESTn1 00:14:42.730 18:35:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:42.730 Running I/O for 10 seconds... 
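The TLS pieces of this fips run are scattered through the trace above; gathered into one sketch (all paths, NQNs and the interchange-format secret copied verbatim from the log), the key handling and the initiator-side attach look like this:

# PSK in NVMe TLS interchange format, written to a 0600 key file
# (same steps fips.sh traces at lines 136-139 above).
KEY_PATH=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"

# The same file is handed to the target (via setup_nvmf_tgt_conf) and to the
# bdevperf initiator, which attaches a TLS-enabled NVMe/TCP controller with it.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"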
00:14:52.696 00:14:52.696 Latency(us) 00:14:52.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.696 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:52.696 Verification LBA range: start 0x0 length 0x2000 00:14:52.696 TLSTESTn1 : 10.02 2986.40 11.67 0.00 0.00 42791.37 4736.47 34078.72 00:14:52.696 =================================================================================================================== 00:14:52.696 Total : 2986.40 11.67 0.00 0.00 42791.37 4736.47 34078.72 00:14:52.696 0 00:14:52.696 18:36:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:52.696 18:36:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:52.954 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:52.955 nvmf_trace.0 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74489 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 74489 ']' 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 74489 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74489 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:52.955 killing process with pid 74489 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74489' 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 74489 00:14:52.955 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.955 00:14:52.955 Latency(us) 00:14:52.955 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.955 =================================================================================================================== 00:14:52.955 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:52.955 [2024-05-16 18:36:06.335864] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:52.955 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 74489 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.213 rmmod nvme_tcp 00:14:53.213 rmmod nvme_fabrics 00:14:53.213 rmmod nvme_keyring 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74450 ']' 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74450 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 74450 ']' 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 74450 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74450 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74450' 00:14:53.213 killing process with pid 74450 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 74450 00:14:53.213 [2024-05-16 18:36:06.681675] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:53.213 [2024-05-16 18:36:06.681732] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:53.213 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 74450 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:53.472 18:36:06 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:53.472 00:14:53.472 real 0m14.542s 00:14:53.472 user 0m19.183s 00:14:53.472 sys 0m6.469s 00:14:53.473 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:53.473 
************************************ 00:14:53.473 18:36:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:53.473 END TEST nvmf_fips 00:14:53.473 ************************************ 00:14:53.732 18:36:06 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:14:53.732 18:36:06 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:14:53.732 18:36:06 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:14:53.732 18:36:06 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.732 18:36:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.732 18:36:07 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:14:53.732 18:36:07 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:53.732 18:36:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.732 18:36:07 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 1 -eq 0 ]] 00:14:53.732 18:36:07 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:53.732 18:36:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:53.732 18:36:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:53.732 18:36:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:53.732 ************************************ 00:14:53.732 START TEST nvmf_identify 00:14:53.732 ************************************ 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:53.732 * Looking for test storage... 00:14:53.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.732 
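The common.sh variables traced in this prologue (host NQN/ID from nvme gen-hostnqn, NVME_CONNECT, NVME_SUBNQN, NVMF_PORT) describe how a kernel initiator could reach the 10.0.0.2:4420 listener these tests bring up. A minimal illustrative sketch with nvme-cli follows; this is an assumption for context, not a step identify.sh itself necessarily performs:

# Illustrative only: connect a Linux host to the test target over TCP,
# using the NQN and address values traced in this log.
HOSTNQN=$(nvme gen-hostnqn)
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn --hostnqn "$HOSTNQN"
# Inspect what appeared; the controller device name (/dev/nvme0) will vary.
nvme list
nvme id-ctrl /dev/nvme0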
18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:53.732 Cannot find device "nvmf_tgt_br" 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:53.732 
Cannot find device "nvmf_tgt_br2" 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:53.732 Cannot find device "nvmf_tgt_br" 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:53.732 Cannot find device "nvmf_tgt_br2" 00:14:53.732 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:14:53.733 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip 
link set nvmf_br up 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:54.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:14:54.024 00:14:54.024 --- 10.0.0.2 ping statistics --- 00:14:54.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.024 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:54.024 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:54.024 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:14:54.024 00:14:54.024 --- 10.0.0.3 ping statistics --- 00:14:54.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.024 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:14:54.024 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:54.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:14:54.304 00:14:54.304 --- 10.0.0.1 ping statistics --- 00:14:54.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.304 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:54.304 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=74836 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 
74836 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 74836 ']' 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.305 18:36:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:54.305 [2024-05-16 18:36:07.588294] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:54.305 [2024-05-16 18:36:07.588387] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.305 [2024-05-16 18:36:07.731967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.563 [2024-05-16 18:36:07.855639] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.563 [2024-05-16 18:36:07.855705] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.563 [2024-05-16 18:36:07.855720] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.563 [2024-05-16 18:36:07.855730] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.563 [2024-05-16 18:36:07.855739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
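For reference, the target launch and RPC wait traced at host/identify.sh@18-23 above reduce to the sketch below. The binary path, netns name, and flags are copied from the log; the socket-polling loop is only a simplified stand-in for the harness's waitforlisten helper, which retries with a timeout.
# Start nvmf_tgt inside the target namespace (flags as logged) and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten polls /var/tmp/spdk.sock until the app answers; a crude equivalent:
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done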
00:14:54.563 [2024-05-16 18:36:07.855925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.563 [2024-05-16 18:36:07.856083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.563 [2024-05-16 18:36:07.856777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.563 [2024-05-16 18:36:07.856855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.563 [2024-05-16 18:36:07.919032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 [2024-05-16 18:36:08.660423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 Malloc0 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 [2024-05-16 18:36:08.774718] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:55.499 [2024-05-16 18:36:08.775046] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:55.499 [ 00:14:55.499 { 00:14:55.499 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:55.499 "subtype": "Discovery", 00:14:55.499 "listen_addresses": [ 00:14:55.499 { 00:14:55.499 "trtype": "TCP", 00:14:55.499 "adrfam": "IPv4", 00:14:55.499 "traddr": "10.0.0.2", 00:14:55.499 "trsvcid": "4420" 00:14:55.499 } 00:14:55.499 ], 00:14:55.499 "allow_any_host": true, 00:14:55.499 "hosts": [] 00:14:55.499 }, 00:14:55.499 { 00:14:55.499 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.499 "subtype": "NVMe", 00:14:55.499 "listen_addresses": [ 00:14:55.499 { 00:14:55.499 "trtype": "TCP", 00:14:55.499 "adrfam": "IPv4", 00:14:55.499 "traddr": "10.0.0.2", 00:14:55.499 "trsvcid": "4420" 00:14:55.499 } 00:14:55.499 ], 00:14:55.499 "allow_any_host": true, 00:14:55.499 "hosts": [], 00:14:55.499 "serial_number": "SPDK00000000000001", 00:14:55.499 "model_number": "SPDK bdev Controller", 00:14:55.499 "max_namespaces": 32, 00:14:55.499 "min_cntlid": 1, 00:14:55.499 "max_cntlid": 65519, 00:14:55.499 "namespaces": [ 00:14:55.499 { 00:14:55.499 "nsid": 1, 00:14:55.499 "bdev_name": "Malloc0", 00:14:55.499 "name": "Malloc0", 00:14:55.499 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:55.499 "eui64": "ABCDEF0123456789", 00:14:55.499 "uuid": "3c7250fe-8e19-4875-a9b0-29b1e819f3ee" 00:14:55.499 } 00:14:55.499 ] 00:14:55.499 } 00:14:55.499 ] 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.499 18:36:08 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:55.499 [2024-05-16 18:36:08.827950] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
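Taken together, the rpc_cmd calls traced above provision the target with one Malloc-backed namespace and two listeners before the identify runs start. Run standalone with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket (an assumption; rpc_cmd in the harness wraps the same client), the sequence looks roughly like:
# Transport, a 64 MB / 512 B-block Malloc bdev, subsystem cnode1, and the listeners (arguments as logged).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems    # returns the JSON listing shown above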
00:14:55.499 [2024-05-16 18:36:08.828010] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74871 ] 00:14:55.499 [2024-05-16 18:36:08.964089] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:55.499 [2024-05-16 18:36:08.964158] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:55.499 [2024-05-16 18:36:08.964166] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:55.499 [2024-05-16 18:36:08.964178] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:55.499 [2024-05-16 18:36:08.964190] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:55.499 [2024-05-16 18:36:08.964375] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:55.499 [2024-05-16 18:36:08.964440] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4f1a60 0 00:14:55.499 [2024-05-16 18:36:08.975898] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:55.499 [2024-05-16 18:36:08.975934] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:55.499 [2024-05-16 18:36:08.975960] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:55.499 [2024-05-16 18:36:08.975964] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:55.499 [2024-05-16 18:36:08.976034] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.499 [2024-05-16 18:36:08.976041] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.499 [2024-05-16 18:36:08.976045] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.499 [2024-05-16 18:36:08.976061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:55.499 [2024-05-16 18:36:08.976090] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.499 [2024-05-16 18:36:08.982996] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.499 [2024-05-16 18:36:08.983014] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.499 [2024-05-16 18:36:08.983019] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.499 [2024-05-16 18:36:08.983024] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5347f0) on tqpair=0x4f1a60 00:14:55.499 [2024-05-16 18:36:08.983039] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:55.499 [2024-05-16 18:36:08.983048] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:55.499 [2024-05-16 18:36:08.983054] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:55.499 [2024-05-16 18:36:08.983081] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.499 [2024-05-16 18:36:08.983087] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.499 [2024-05-16 18:36:08.983091] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.499 [2024-05-16 18:36:08.983100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.499 [2024-05-16 18:36:08.983126] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.499 [2024-05-16 18:36:08.983243] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.499 [2024-05-16 18:36:08.983252] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.499 [2024-05-16 18:36:08.983256] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.499 [2024-05-16 18:36:08.983260] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5347f0) on tqpair=0x4f1a60 00:14:55.499 [2024-05-16 18:36:08.983266] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:55.499 [2024-05-16 18:36:08.983275] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:55.499 [2024-05-16 18:36:08.983283] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.499 [2024-05-16 18:36:08.983288] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.499 [2024-05-16 18:36:08.983292] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.499 [2024-05-16 18:36:08.983300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.500 [2024-05-16 18:36:08.983319] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.500 [2024-05-16 18:36:08.983380] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.983387] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.983391] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983395] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5347f0) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.983401] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:55.500 [2024-05-16 18:36:08.983411] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:55.500 [2024-05-16 18:36:08.983419] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983423] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983427] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.983435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.500 [2024-05-16 18:36:08.983452] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.500 [2024-05-16 18:36:08.983515] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.983529] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:14:55.500 [2024-05-16 18:36:08.983534] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983538] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5347f0) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.983545] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:55.500 [2024-05-16 18:36:08.983556] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983561] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983565] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.983573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.500 [2024-05-16 18:36:08.983593] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.500 [2024-05-16 18:36:08.983654] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.983661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.983665] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983684] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5347f0) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.983689] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:55.500 [2024-05-16 18:36:08.983695] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:55.500 [2024-05-16 18:36:08.983703] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:55.500 [2024-05-16 18:36:08.983809] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:55.500 [2024-05-16 18:36:08.983846] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:55.500 [2024-05-16 18:36:08.983858] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983867] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.983874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.500 [2024-05-16 18:36:08.983894] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.500 [2024-05-16 18:36:08.983977] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.983984] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.983989] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.983993] nvme_tcp.c: 
909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5347f0) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.983999] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:55.500 [2024-05-16 18:36:08.984009] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984014] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984018] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.984025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.500 [2024-05-16 18:36:08.984042] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.500 [2024-05-16 18:36:08.984103] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.984109] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.984113] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984117] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5347f0) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.984122] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:55.500 [2024-05-16 18:36:08.984128] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:55.500 [2024-05-16 18:36:08.984136] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:55.500 [2024-05-16 18:36:08.984151] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:55.500 [2024-05-16 18:36:08.984163] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984183] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.984190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.500 [2024-05-16 18:36:08.984208] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.500 [2024-05-16 18:36:08.984337] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.500 [2024-05-16 18:36:08.984349] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.500 [2024-05-16 18:36:08.984354] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984358] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f1a60): datao=0, datal=4096, cccid=0 00:14:55.500 [2024-05-16 18:36:08.984363] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5347f0) on tqpair(0x4f1a60): expected_datao=0, payload_size=4096 00:14:55.500 [2024-05-16 18:36:08.984369] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984376] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984381] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984390] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.984396] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.984399] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984403] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5347f0) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.984413] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:55.500 [2024-05-16 18:36:08.984418] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:55.500 [2024-05-16 18:36:08.984424] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:55.500 [2024-05-16 18:36:08.984429] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:55.500 [2024-05-16 18:36:08.984434] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:55.500 [2024-05-16 18:36:08.984439] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:55.500 [2024-05-16 18:36:08.984469] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:55.500 [2024-05-16 18:36:08.984495] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984500] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984504] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.984512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:55.500 [2024-05-16 18:36:08.984533] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.500 [2024-05-16 18:36:08.984623] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.984630] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.984634] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984638] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5347f0) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.984647] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984652] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984671] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.984678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.500 [2024-05-16 18:36:08.984685] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984689] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984692] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.984698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.500 [2024-05-16 18:36:08.984705] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984713] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.984719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.500 [2024-05-16 18:36:08.984725] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984729] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984733] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.984739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.500 [2024-05-16 18:36:08.984744] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:55.500 [2024-05-16 18:36:08.984757] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:55.500 [2024-05-16 18:36:08.984764] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984768] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.984775] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.500 [2024-05-16 18:36:08.984795] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5347f0, cid 0, qid 0 00:14:55.500 [2024-05-16 18:36:08.984802] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534950, cid 1, qid 0 00:14:55.500 [2024-05-16 18:36:08.984807] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534ab0, cid 2, qid 0 00:14:55.500 [2024-05-16 18:36:08.984812] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534c10, cid 3, qid 0 00:14:55.500 [2024-05-16 18:36:08.984817] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534d70, cid 4, qid 0 00:14:55.500 [2024-05-16 18:36:08.984969] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.984978] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.984982] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.984986] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534d70) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.984992] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:55.500 [2024-05-16 18:36:08.984998] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:55.500 [2024-05-16 18:36:08.985011] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985016] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.985024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.500 [2024-05-16 18:36:08.985044] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534d70, cid 4, qid 0 00:14:55.500 [2024-05-16 18:36:08.985131] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.500 [2024-05-16 18:36:08.985138] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.500 [2024-05-16 18:36:08.985142] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985145] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f1a60): datao=0, datal=4096, cccid=4 00:14:55.500 [2024-05-16 18:36:08.985150] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x534d70) on tqpair(0x4f1a60): expected_datao=0, payload_size=4096 00:14:55.500 [2024-05-16 18:36:08.985155] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985162] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985167] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985175] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.985181] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.985185] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985189] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534d70) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.985218] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:55.500 [2024-05-16 18:36:08.985261] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985267] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.985274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.500 [2024-05-16 18:36:08.985297] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985302] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985305] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4f1a60) 00:14:55.500 [2024-05-16 18:36:08.985327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.500 [2024-05-16 18:36:08.985351] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534d70, cid 4, qid 0 00:14:55.500 [2024-05-16 18:36:08.985359] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534ed0, cid 5, qid 0 00:14:55.500 [2024-05-16 18:36:08.985501] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.500 [2024-05-16 18:36:08.985516] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.500 [2024-05-16 18:36:08.985521] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985525] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f1a60): datao=0, datal=1024, cccid=4 00:14:55.500 [2024-05-16 18:36:08.985530] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x534d70) on tqpair(0x4f1a60): expected_datao=0, payload_size=1024 00:14:55.500 [2024-05-16 18:36:08.985535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985544] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985548] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985554] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.985559] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.985564] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985568] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534ed0) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.985586] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.500 [2024-05-16 18:36:08.985593] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.500 [2024-05-16 18:36:08.985597] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.500 [2024-05-16 18:36:08.985601] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534d70) on tqpair=0x4f1a60 00:14:55.500 [2024-05-16 18:36:08.985630] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.985635] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f1a60) 00:14:55.501 [2024-05-16 18:36:08.985643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.501 [2024-05-16 18:36:08.985682] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534d70, cid 4, qid 0 00:14:55.501 [2024-05-16 18:36:08.985761] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.501 [2024-05-16 18:36:08.985769] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.501 [2024-05-16 18:36:08.985788] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.985792] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f1a60): datao=0, datal=3072, cccid=4 00:14:55.501 [2024-05-16 18:36:08.985797] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x534d70) on tqpair(0x4f1a60): expected_datao=0, payload_size=3072 00:14:55.501 [2024-05-16 18:36:08.985802] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.985809] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.985813] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 
18:36:08.985821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.501 [2024-05-16 18:36:08.985827] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.501 [2024-05-16 18:36:08.985830] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.985845] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534d70) on tqpair=0x4f1a60 00:14:55.501 [2024-05-16 18:36:08.985857] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.985862] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4f1a60) 00:14:55.501 [2024-05-16 18:36:08.985869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.501 [2024-05-16 18:36:08.985910] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534d70, cid 4, qid 0 00:14:55.501 [2024-05-16 18:36:08.985996] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.501 [2024-05-16 18:36:08.986004] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.501 [2024-05-16 18:36:08.986008] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986012] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4f1a60): datao=0, datal=8, cccid=4 00:14:55.501 [2024-05-16 18:36:08.986017] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x534d70) on tqpair(0x4f1a60): expected_datao=0, payload_size=8 00:14:55.501 [2024-05-16 18:36:08.986022] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986031] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986035] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986050] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.501 [2024-05-16 18:36:08.986057] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.501 ===================================================== 00:14:55.501 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:55.501 ===================================================== 00:14:55.501 Controller Capabilities/Features 00:14:55.501 ================================ 00:14:55.501 Vendor ID: 0000 00:14:55.501 Subsystem Vendor ID: 0000 00:14:55.501 Serial Number: .................... 00:14:55.501 Model Number: ........................................ 
00:14:55.501 Firmware Version: 24.09 00:14:55.501 Recommended Arb Burst: 0 00:14:55.501 IEEE OUI Identifier: 00 00 00 00:14:55.501 Multi-path I/O 00:14:55.501 May have multiple subsystem ports: No 00:14:55.501 May have multiple controllers: No 00:14:55.501 Associated with SR-IOV VF: No 00:14:55.501 Max Data Transfer Size: 131072 00:14:55.501 Max Number of Namespaces: 0 00:14:55.501 Max Number of I/O Queues: 1024 00:14:55.501 NVMe Specification Version (VS): 1.3 00:14:55.501 NVMe Specification Version (Identify): 1.3 00:14:55.501 Maximum Queue Entries: 128 00:14:55.501 Contiguous Queues Required: Yes 00:14:55.501 Arbitration Mechanisms Supported 00:14:55.501 Weighted Round Robin: Not Supported 00:14:55.501 Vendor Specific: Not Supported 00:14:55.501 Reset Timeout: 15000 ms 00:14:55.501 Doorbell Stride: 4 bytes 00:14:55.501 NVM Subsystem Reset: Not Supported 00:14:55.501 Command Sets Supported 00:14:55.501 NVM Command Set: Supported 00:14:55.501 Boot Partition: Not Supported 00:14:55.501 Memory Page Size Minimum: 4096 bytes 00:14:55.501 Memory Page Size Maximum: 4096 bytes 00:14:55.501 Persistent Memory Region: Not Supported 00:14:55.501 Optional Asynchronous Events Supported 00:14:55.501 Namespace Attribute Notices: Not Supported 00:14:55.501 Firmware Activation Notices: Not Supported 00:14:55.501 ANA Change Notices: Not Supported 00:14:55.501 PLE Aggregate Log Change Notices: Not Supported 00:14:55.501 LBA Status Info Alert Notices: Not Supported 00:14:55.501 EGE Aggregate Log Change Notices: Not Supported 00:14:55.501 Normal NVM Subsystem Shutdown event: Not Supported 00:14:55.501 Zone Descriptor Change Notices: Not Supported 00:14:55.501 Discovery Log Change Notices: Supported 00:14:55.501 Controller Attributes 00:14:55.501 128-bit Host Identifier: Not Supported 00:14:55.501 Non-Operational Permissive Mode: Not Supported 00:14:55.501 NVM Sets: Not Supported 00:14:55.501 Read Recovery Levels: Not Supported 00:14:55.501 Endurance Groups: Not Supported 00:14:55.501 Predictable Latency Mode: Not Supported 00:14:55.501 Traffic Based Keep ALive: Not Supported 00:14:55.501 Namespace Granularity: Not Supported 00:14:55.501 SQ Associations: Not Supported 00:14:55.501 UUID List: Not Supported 00:14:55.501 Multi-Domain Subsystem: Not Supported 00:14:55.501 Fixed Capacity Management: Not Supported 00:14:55.501 Variable Capacity Management: Not Supported 00:14:55.501 Delete Endurance Group: Not Supported 00:14:55.501 Delete NVM Set: Not Supported 00:14:55.501 Extended LBA Formats Supported: Not Supported 00:14:55.501 Flexible Data Placement Supported: Not Supported 00:14:55.501 00:14:55.501 Controller Memory Buffer Support 00:14:55.501 ================================ 00:14:55.501 Supported: No 00:14:55.501 00:14:55.501 Persistent Memory Region Support 00:14:55.501 ================================ 00:14:55.501 Supported: No 00:14:55.501 00:14:55.501 Admin Command Set Attributes 00:14:55.501 ============================ 00:14:55.501 Security Send/Receive: Not Supported 00:14:55.501 Format NVM: Not Supported 00:14:55.501 Firmware Activate/Download: Not Supported 00:14:55.501 Namespace Management: Not Supported 00:14:55.501 Device Self-Test: Not Supported 00:14:55.501 Directives: Not Supported 00:14:55.501 NVMe-MI: Not Supported 00:14:55.501 Virtualization Management: Not Supported 00:14:55.501 Doorbell Buffer Config: Not Supported 00:14:55.501 Get LBA Status Capability: Not Supported 00:14:55.501 Command & Feature Lockdown Capability: Not Supported 00:14:55.501 Abort Command Limit: 1 00:14:55.501 Async 
Event Request Limit: 4 00:14:55.501 Number of Firmware Slots: N/A 00:14:55.501 Firmware Slot 1 Read-Only: N/A 00:14:55.501 [2024-05-16 18:36:08.986062] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986066] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534d70) on tqpair=0x4f1a60 00:14:55.501 Firmware Activation Without Reset: N/A 00:14:55.501 Multiple Update Detection Support: N/A 00:14:55.501 Firmware Update Granularity: No Information Provided 00:14:55.501 Per-Namespace SMART Log: No 00:14:55.501 Asymmetric Namespace Access Log Page: Not Supported 00:14:55.501 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:55.501 Command Effects Log Page: Not Supported 00:14:55.501 Get Log Page Extended Data: Supported 00:14:55.501 Telemetry Log Pages: Not Supported 00:14:55.501 Persistent Event Log Pages: Not Supported 00:14:55.501 Supported Log Pages Log Page: May Support 00:14:55.501 Commands Supported & Effects Log Page: Not Supported 00:14:55.501 Feature Identifiers & Effects Log Page:May Support 00:14:55.501 NVMe-MI Commands & Effects Log Page: May Support 00:14:55.501 Data Area 4 for Telemetry Log: Not Supported 00:14:55.501 Error Log Page Entries Supported: 128 00:14:55.501 Keep Alive: Not Supported 00:14:55.501 00:14:55.501 NVM Command Set Attributes 00:14:55.501 ========================== 00:14:55.501 Submission Queue Entry Size 00:14:55.501 Max: 1 00:14:55.501 Min: 1 00:14:55.501 Completion Queue Entry Size 00:14:55.501 Max: 1 00:14:55.501 Min: 1 00:14:55.501 Number of Namespaces: 0 00:14:55.501 Compare Command: Not Supported 00:14:55.501 Write Uncorrectable Command: Not Supported 00:14:55.501 Dataset Management Command: Not Supported 00:14:55.501 Write Zeroes Command: Not Supported 00:14:55.501 Set Features Save Field: Not Supported 00:14:55.501 Reservations: Not Supported 00:14:55.501 Timestamp: Not Supported 00:14:55.501 Copy: Not Supported 00:14:55.501 Volatile Write Cache: Not Present 00:14:55.501 Atomic Write Unit (Normal): 1 00:14:55.501 Atomic Write Unit (PFail): 1 00:14:55.501 Atomic Compare & Write Unit: 1 00:14:55.501 Fused Compare & Write: Supported 00:14:55.501 Scatter-Gather List 00:14:55.501 SGL Command Set: Supported 00:14:55.501 SGL Keyed: Supported 00:14:55.501 SGL Bit Bucket Descriptor: Not Supported 00:14:55.501 SGL Metadata Pointer: Not Supported 00:14:55.501 Oversized SGL: Not Supported 00:14:55.501 SGL Metadata Address: Not Supported 00:14:55.501 SGL Offset: Supported 00:14:55.501 Transport SGL Data Block: Not Supported 00:14:55.501 Replay Protected Memory Block: Not Supported 00:14:55.501 00:14:55.501 Firmware Slot Information 00:14:55.501 ========================= 00:14:55.501 Active slot: 0 00:14:55.501 00:14:55.501 00:14:55.501 Error Log 00:14:55.501 ========= 00:14:55.501 00:14:55.501 Active Namespaces 00:14:55.501 ================= 00:14:55.501 Discovery Log Page 00:14:55.501 ================== 00:14:55.501 Generation Counter: 2 00:14:55.501 Number of Records: 2 00:14:55.501 Record Format: 0 00:14:55.501 00:14:55.501 Discovery Log Entry 0 00:14:55.501 ---------------------- 00:14:55.501 Transport Type: 3 (TCP) 00:14:55.501 Address Family: 1 (IPv4) 00:14:55.501 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:55.501 Entry Flags: 00:14:55.501 Duplicate Returned Information: 1 00:14:55.501 Explicit Persistent Connection Support for Discovery: 1 00:14:55.501 Transport Requirements: 00:14:55.501 Secure Channel: Not Required 00:14:55.501 Port ID: 0 (0x0000) 00:14:55.501 
Controller ID: 65535 (0xffff) 00:14:55.501 Admin Max SQ Size: 128 00:14:55.501 Transport Service Identifier: 4420 00:14:55.501 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:55.501 Transport Address: 10.0.0.2 00:14:55.501 Discovery Log Entry 1 00:14:55.501 ---------------------- 00:14:55.501 Transport Type: 3 (TCP) 00:14:55.501 Address Family: 1 (IPv4) 00:14:55.501 Subsystem Type: 2 (NVM Subsystem) 00:14:55.501 Entry Flags: 00:14:55.501 Duplicate Returned Information: 0 00:14:55.501 Explicit Persistent Connection Support for Discovery: 0 00:14:55.501 Transport Requirements: 00:14:55.501 Secure Channel: Not Required 00:14:55.501 Port ID: 0 (0x0000) 00:14:55.501 Controller ID: 65535 (0xffff) 00:14:55.501 Admin Max SQ Size: 128 00:14:55.501 Transport Service Identifier: 4420 00:14:55.501 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:55.501 Transport Address: 10.0.0.2 [2024-05-16 18:36:08.986179] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:55.501 [2024-05-16 18:36:08.986196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.501 [2024-05-16 18:36:08.986204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.501 [2024-05-16 18:36:08.986210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.501 [2024-05-16 18:36:08.986216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.501 [2024-05-16 18:36:08.986226] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986230] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986234] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f1a60) 00:14:55.501 [2024-05-16 18:36:08.986264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.501 [2024-05-16 18:36:08.986302] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534c10, cid 3, qid 0 00:14:55.501 [2024-05-16 18:36:08.986371] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.501 [2024-05-16 18:36:08.986378] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.501 [2024-05-16 18:36:08.986382] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986387] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534c10) on tqpair=0x4f1a60 00:14:55.501 [2024-05-16 18:36:08.986395] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986399] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986403] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f1a60) 00:14:55.501 [2024-05-16 18:36:08.986410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.501 [2024-05-16 18:36:08.986430] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534c10, cid 3, qid 0 00:14:55.501 [2024-05-16 
18:36:08.986531] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.501 [2024-05-16 18:36:08.986544] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.501 [2024-05-16 18:36:08.986549] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986554] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534c10) on tqpair=0x4f1a60 00:14:55.501 [2024-05-16 18:36:08.986560] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:55.501 [2024-05-16 18:36:08.986565] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:55.501 [2024-05-16 18:36:08.986576] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986581] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986584] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f1a60) 00:14:55.501 [2024-05-16 18:36:08.986592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.501 [2024-05-16 18:36:08.986609] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534c10, cid 3, qid 0 00:14:55.501 [2024-05-16 18:36:08.986698] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.501 [2024-05-16 18:36:08.986705] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.501 [2024-05-16 18:36:08.986709] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986713] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534c10) on tqpair=0x4f1a60 00:14:55.501 [2024-05-16 18:36:08.986724] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.501 [2024-05-16 18:36:08.986729] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.502 [2024-05-16 18:36:08.986732] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f1a60) 00:14:55.502 [2024-05-16 18:36:08.986740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.502 [2024-05-16 18:36:08.986756] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534c10, cid 3, qid 0 00:14:55.502 [2024-05-16 18:36:08.986809] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.502 [2024-05-16 18:36:08.986816] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.502 [2024-05-16 18:36:08.986819] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.502 [2024-05-16 18:36:08.986823] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534c10) on tqpair=0x4f1a60 00:14:55.502 [2024-05-16 18:36:08.986834] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.502 [2024-05-16 18:36:08.986838] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.502 [2024-05-16 18:36:08.986842] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4f1a60) 00:14:55.502 [2024-05-16 18:36:08.989926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.502 [2024-05-16 
18:36:08.989965] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x534c10, cid 3, qid 0 00:14:55.502 [2024-05-16 18:36:08.990041] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.502 [2024-05-16 18:36:08.990049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.502 [2024-05-16 18:36:08.990053] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.502 [2024-05-16 18:36:08.990057] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x534c10) on tqpair=0x4f1a60 00:14:55.502 [2024-05-16 18:36:08.990067] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 3 milliseconds 00:14:55.765 00:14:55.765 18:36:09 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:55.765 [2024-05-16 18:36:09.033479] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:55.765 [2024-05-16 18:36:09.033530] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74880 ] 00:14:55.765 [2024-05-16 18:36:09.170815] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:55.765 [2024-05-16 18:36:09.177953] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:55.765 [2024-05-16 18:36:09.177964] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:55.765 [2024-05-16 18:36:09.177976] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:55.765 [2024-05-16 18:36:09.177986] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:55.765 [2024-05-16 18:36:09.178131] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:55.765 [2024-05-16 18:36:09.178177] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xce9a60 0 00:14:55.765 [2024-05-16 18:36:09.184936] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:55.765 [2024-05-16 18:36:09.184958] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:55.765 [2024-05-16 18:36:09.184968] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:55.765 [2024-05-16 18:36:09.184972] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:55.765 [2024-05-16 18:36:09.185029] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.185036] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.185040] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.765 [2024-05-16 18:36:09.185054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:55.765 [2024-05-16 18:36:09.185083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.765 [2024-05-16 18:36:09.192901] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:14:55.765 [2024-05-16 18:36:09.192918] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.765 [2024-05-16 18:36:09.192923] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.192928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2c7f0) on tqpair=0xce9a60 00:14:55.765 [2024-05-16 18:36:09.192940] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:55.765 [2024-05-16 18:36:09.192949] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:55.765 [2024-05-16 18:36:09.192955] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:55.765 [2024-05-16 18:36:09.192971] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.192976] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.192980] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.765 [2024-05-16 18:36:09.192989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.765 [2024-05-16 18:36:09.193015] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.765 [2024-05-16 18:36:09.193120] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.765 [2024-05-16 18:36:09.193127] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.765 [2024-05-16 18:36:09.193131] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193135] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2c7f0) on tqpair=0xce9a60 00:14:55.765 [2024-05-16 18:36:09.193141] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:55.765 [2024-05-16 18:36:09.193148] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:55.765 [2024-05-16 18:36:09.193156] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193160] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193163] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.765 [2024-05-16 18:36:09.193171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.765 [2024-05-16 18:36:09.193205] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.765 [2024-05-16 18:36:09.193256] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.765 [2024-05-16 18:36:09.193262] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.765 [2024-05-16 18:36:09.193265] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193269] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2c7f0) on tqpair=0xce9a60 00:14:55.765 [2024-05-16 18:36:09.193275] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 
00:14:55.765 [2024-05-16 18:36:09.193299] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:55.765 [2024-05-16 18:36:09.193306] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193311] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193315] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.765 [2024-05-16 18:36:09.193321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.765 [2024-05-16 18:36:09.193339] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.765 [2024-05-16 18:36:09.193402] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.765 [2024-05-16 18:36:09.193413] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.765 [2024-05-16 18:36:09.193418] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193422] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2c7f0) on tqpair=0xce9a60 00:14:55.765 [2024-05-16 18:36:09.193428] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:55.765 [2024-05-16 18:36:09.193438] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193442] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193446] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.765 [2024-05-16 18:36:09.193453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.765 [2024-05-16 18:36:09.193471] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.765 [2024-05-16 18:36:09.193542] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.765 [2024-05-16 18:36:09.193548] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.765 [2024-05-16 18:36:09.193552] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193556] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2c7f0) on tqpair=0xce9a60 00:14:55.765 [2024-05-16 18:36:09.193562] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:55.765 [2024-05-16 18:36:09.193567] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:55.765 [2024-05-16 18:36:09.193575] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:55.765 [2024-05-16 18:36:09.193681] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:55.765 [2024-05-16 18:36:09.193699] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:55.765 [2024-05-16 18:36:09.193709] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193714] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193717] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.765 [2024-05-16 18:36:09.193724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.765 [2024-05-16 18:36:09.193743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.765 [2024-05-16 18:36:09.193798] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.765 [2024-05-16 18:36:09.193805] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.765 [2024-05-16 18:36:09.193808] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193812] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2c7f0) on tqpair=0xce9a60 00:14:55.765 [2024-05-16 18:36:09.193818] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:55.765 [2024-05-16 18:36:09.193856] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193861] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193866] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.765 [2024-05-16 18:36:09.193873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.765 [2024-05-16 18:36:09.193892] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.765 [2024-05-16 18:36:09.193949] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.765 [2024-05-16 18:36:09.193956] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.765 [2024-05-16 18:36:09.193960] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.765 [2024-05-16 18:36:09.193964] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2c7f0) on tqpair=0xce9a60 00:14:55.765 [2024-05-16 18:36:09.193969] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:55.765 [2024-05-16 18:36:09.193974] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.193982] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:55.766 [2024-05-16 18:36:09.194000] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.194011] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194016] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.194024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.766 
[2024-05-16 18:36:09.194044] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.766 [2024-05-16 18:36:09.194183] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.766 [2024-05-16 18:36:09.194190] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.766 [2024-05-16 18:36:09.194196] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194199] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce9a60): datao=0, datal=4096, cccid=0 00:14:55.766 [2024-05-16 18:36:09.194204] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2c7f0) on tqpair(0xce9a60): expected_datao=0, payload_size=4096 00:14:55.766 [2024-05-16 18:36:09.194209] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194217] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194221] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194229] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.766 [2024-05-16 18:36:09.194235] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.766 [2024-05-16 18:36:09.194239] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194243] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2c7f0) on tqpair=0xce9a60 00:14:55.766 [2024-05-16 18:36:09.194266] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:55.766 [2024-05-16 18:36:09.194271] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:55.766 [2024-05-16 18:36:09.194276] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:55.766 [2024-05-16 18:36:09.194280] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:55.766 [2024-05-16 18:36:09.194300] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:55.766 [2024-05-16 18:36:09.194305] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.194319] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.194330] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194335] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194338] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.194346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:55.766 [2024-05-16 18:36:09.194366] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.766 [2024-05-16 18:36:09.194432] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.766 [2024-05-16 18:36:09.194439] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.766 
[2024-05-16 18:36:09.194443] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194447] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2c7f0) on tqpair=0xce9a60 00:14:55.766 [2024-05-16 18:36:09.194454] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194458] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194462] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.194468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.766 [2024-05-16 18:36:09.194474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194478] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194481] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.194487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.766 [2024-05-16 18:36:09.194493] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194497] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194500] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.194506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.766 [2024-05-16 18:36:09.194512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194516] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194519] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.194525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.766 [2024-05-16 18:36:09.194546] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.194559] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.194567] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194571] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.194578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.766 [2024-05-16 18:36:09.194597] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c7f0, cid 0, qid 0 00:14:55.766 [2024-05-16 18:36:09.194604] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2c950, cid 1, qid 0 00:14:55.766 [2024-05-16 18:36:09.194609] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cab0, cid 2, qid 0 00:14:55.766 
[2024-05-16 18:36:09.194613] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.766 [2024-05-16 18:36:09.194618] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cd70, cid 4, qid 0 00:14:55.766 [2024-05-16 18:36:09.194737] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.766 [2024-05-16 18:36:09.194749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.766 [2024-05-16 18:36:09.194753] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194757] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cd70) on tqpair=0xce9a60 00:14:55.766 [2024-05-16 18:36:09.194763] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:55.766 [2024-05-16 18:36:09.194769] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.194781] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.194789] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.194796] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194800] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194804] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.194811] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:55.766 [2024-05-16 18:36:09.194843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cd70, cid 4, qid 0 00:14:55.766 [2024-05-16 18:36:09.194908] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.766 [2024-05-16 18:36:09.194916] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.766 [2024-05-16 18:36:09.194919] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.194924] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cd70) on tqpair=0xce9a60 00:14:55.766 [2024-05-16 18:36:09.194975] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.194995] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.195005] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195009] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.195016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.766 [2024-05-16 18:36:09.195038] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cd70, cid 4, qid 0 00:14:55.766 [2024-05-16 
18:36:09.195107] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.766 [2024-05-16 18:36:09.195126] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.766 [2024-05-16 18:36:09.195129] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195133] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce9a60): datao=0, datal=4096, cccid=4 00:14:55.766 [2024-05-16 18:36:09.195138] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2cd70) on tqpair(0xce9a60): expected_datao=0, payload_size=4096 00:14:55.766 [2024-05-16 18:36:09.195142] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195149] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195153] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195187] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.766 [2024-05-16 18:36:09.195195] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.766 [2024-05-16 18:36:09.195198] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195203] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cd70) on tqpair=0xce9a60 00:14:55.766 [2024-05-16 18:36:09.195220] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:55.766 [2024-05-16 18:36:09.195232] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.195244] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.195252] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195257] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.195265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.766 [2024-05-16 18:36:09.195286] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cd70, cid 4, qid 0 00:14:55.766 [2024-05-16 18:36:09.195375] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.766 [2024-05-16 18:36:09.195391] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.766 [2024-05-16 18:36:09.195396] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195400] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce9a60): datao=0, datal=4096, cccid=4 00:14:55.766 [2024-05-16 18:36:09.195405] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2cd70) on tqpair(0xce9a60): expected_datao=0, payload_size=4096 00:14:55.766 [2024-05-16 18:36:09.195411] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195419] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195423] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195432] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.766 
[2024-05-16 18:36:09.195438] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.766 [2024-05-16 18:36:09.195442] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195446] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cd70) on tqpair=0xce9a60 00:14:55.766 [2024-05-16 18:36:09.195459] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.195481] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:55.766 [2024-05-16 18:36:09.195506] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195525] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce9a60) 00:14:55.766 [2024-05-16 18:36:09.195532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.766 [2024-05-16 18:36:09.195553] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cd70, cid 4, qid 0 00:14:55.766 [2024-05-16 18:36:09.195623] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.766 [2024-05-16 18:36:09.195636] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.766 [2024-05-16 18:36:09.195640] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.766 [2024-05-16 18:36:09.195659] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce9a60): datao=0, datal=4096, cccid=4 00:14:55.767 [2024-05-16 18:36:09.195664] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2cd70) on tqpair(0xce9a60): expected_datao=0, payload_size=4096 00:14:55.767 [2024-05-16 18:36:09.195669] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195675] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195679] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195688] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.767 [2024-05-16 18:36:09.195693] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.767 [2024-05-16 18:36:09.195697] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195701] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cd70) on tqpair=0xce9a60 00:14:55.767 [2024-05-16 18:36:09.195715] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:55.767 [2024-05-16 18:36:09.195724] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:55.767 [2024-05-16 18:36:09.195734] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:55.767 [2024-05-16 18:36:09.195740] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:55.767 [2024-05-16 18:36:09.195746] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:55.767 [2024-05-16 18:36:09.195752] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:55.767 [2024-05-16 18:36:09.195757] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:55.767 [2024-05-16 18:36:09.195762] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:55.767 [2024-05-16 18:36:09.195782] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195791] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce9a60) 00:14:55.767 [2024-05-16 18:36:09.195799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.767 [2024-05-16 18:36:09.195807] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195810] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195814] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce9a60) 00:14:55.767 [2024-05-16 18:36:09.195832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.767 [2024-05-16 18:36:09.195858] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cd70, cid 4, qid 0 00:14:55.767 [2024-05-16 18:36:09.195866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2ced0, cid 5, qid 0 00:14:55.767 [2024-05-16 18:36:09.195940] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.767 [2024-05-16 18:36:09.195947] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.767 [2024-05-16 18:36:09.195951] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195955] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cd70) on tqpair=0xce9a60 00:14:55.767 [2024-05-16 18:36:09.195961] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.767 [2024-05-16 18:36:09.195967] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.767 [2024-05-16 18:36:09.195970] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195974] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2ced0) on tqpair=0xce9a60 00:14:55.767 [2024-05-16 18:36:09.195985] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.195989] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce9a60) 00:14:55.767 [2024-05-16 18:36:09.195996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.767 [2024-05-16 18:36:09.196015] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2ced0, cid 5, qid 0 00:14:55.767 [2024-05-16 18:36:09.196076] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.767 [2024-05-16 18:36:09.196082] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.767 [2024-05-16 18:36:09.196086] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196090] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2ced0) on tqpair=0xce9a60 00:14:55.767 [2024-05-16 18:36:09.196099] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196104] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce9a60) 00:14:55.767 [2024-05-16 18:36:09.196110] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.767 [2024-05-16 18:36:09.196127] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2ced0, cid 5, qid 0 00:14:55.767 [2024-05-16 18:36:09.196193] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.767 [2024-05-16 18:36:09.196204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.767 [2024-05-16 18:36:09.196208] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196212] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2ced0) on tqpair=0xce9a60 00:14:55.767 [2024-05-16 18:36:09.196222] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196227] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce9a60) 00:14:55.767 [2024-05-16 18:36:09.196233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.767 [2024-05-16 18:36:09.196250] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2ced0, cid 5, qid 0 00:14:55.767 [2024-05-16 18:36:09.196318] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.767 [2024-05-16 18:36:09.196325] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.767 [2024-05-16 18:36:09.196328] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196332] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2ced0) on tqpair=0xce9a60 00:14:55.767 [2024-05-16 18:36:09.196346] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196351] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xce9a60) 00:14:55.767 [2024-05-16 18:36:09.196357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.767 [2024-05-16 18:36:09.196365] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196369] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xce9a60) 00:14:55.767 [2024-05-16 18:36:09.196375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.767 [2024-05-16 18:36:09.196383] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196387] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xce9a60) 00:14:55.767 [2024-05-16 18:36:09.196393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) 
qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.767 [2024-05-16 18:36:09.196407] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196412] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xce9a60) 00:14:55.767 [2024-05-16 18:36:09.196418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.767 [2024-05-16 18:36:09.196454] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2ced0, cid 5, qid 0 00:14:55.767 [2024-05-16 18:36:09.196462] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cd70, cid 4, qid 0 00:14:55.767 [2024-05-16 18:36:09.196467] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2d030, cid 6, qid 0 00:14:55.767 [2024-05-16 18:36:09.196471] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2d190, cid 7, qid 0 00:14:55.767 [2024-05-16 18:36:09.196637] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.767 [2024-05-16 18:36:09.196666] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.767 [2024-05-16 18:36:09.196670] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196674] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce9a60): datao=0, datal=8192, cccid=5 00:14:55.767 [2024-05-16 18:36:09.196679] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2ced0) on tqpair(0xce9a60): expected_datao=0, payload_size=8192 00:14:55.767 [2024-05-16 18:36:09.196684] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196700] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196705] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196711] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.767 [2024-05-16 18:36:09.196717] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.767 [2024-05-16 18:36:09.196720] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196724] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce9a60): datao=0, datal=512, cccid=4 00:14:55.767 [2024-05-16 18:36:09.196729] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2cd70) on tqpair(0xce9a60): expected_datao=0, payload_size=512 00:14:55.767 [2024-05-16 18:36:09.196733] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196739] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196743] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.767 [2024-05-16 18:36:09.196754] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.767 [2024-05-16 18:36:09.196757] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196760] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce9a60): datao=0, datal=512, cccid=6 00:14:55.767 [2024-05-16 18:36:09.196765] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2d030) on tqpair(0xce9a60): expected_datao=0, payload_size=512 00:14:55.767 [2024-05-16 18:36:09.196769] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196774] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196778] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196783] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:55.767 [2024-05-16 18:36:09.196789] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:55.767 [2024-05-16 18:36:09.196792] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196796] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xce9a60): datao=0, datal=4096, cccid=7 00:14:55.767 [2024-05-16 18:36:09.196800] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd2d190) on tqpair(0xce9a60): expected_datao=0, payload_size=4096 00:14:55.767 [2024-05-16 18:36:09.196804] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196810] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.196814] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.199930] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.767 [2024-05-16 18:36:09.199947] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.767 [2024-05-16 18:36:09.199951] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.199955] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2ced0) on tqpair=0xce9a60 00:14:55.767 [2024-05-16 18:36:09.199974] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.767 [2024-05-16 18:36:09.199981] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.767 [2024-05-16 18:36:09.199985] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.767 [2024-05-16 18:36:09.199989] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cd70) on tqpair=0xce9a60 00:14:55.767 [2024-05-16 18:36:09.199999] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.767 [2024-05-16 18:36:09.200005] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.767 ===================================================== 00:14:55.767 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.767 ===================================================== 00:14:55.767 Controller Capabilities/Features 00:14:55.767 ================================ 00:14:55.767 Vendor ID: 8086 00:14:55.767 Subsystem Vendor ID: 8086 00:14:55.767 Serial Number: SPDK00000000000001 00:14:55.767 Model Number: SPDK bdev Controller 00:14:55.767 Firmware Version: 24.09 00:14:55.767 Recommended Arb Burst: 6 00:14:55.767 IEEE OUI Identifier: e4 d2 5c 00:14:55.767 Multi-path I/O 00:14:55.767 May have multiple subsystem ports: Yes 00:14:55.767 May have multiple controllers: Yes 00:14:55.767 Associated with SR-IOV VF: No 00:14:55.767 Max Data Transfer Size: 131072 00:14:55.767 Max Number of Namespaces: 32 00:14:55.767 Max Number of I/O Queues: 127 00:14:55.767 NVMe Specification Version (VS): 1.3 00:14:55.767 NVMe Specification Version (Identify): 1.3 
00:14:55.767 Maximum Queue Entries: 128 00:14:55.767 Contiguous Queues Required: Yes 00:14:55.767 Arbitration Mechanisms Supported 00:14:55.767 Weighted Round Robin: Not Supported 00:14:55.767 Vendor Specific: Not Supported 00:14:55.768 Reset Timeout: 15000 ms 00:14:55.768 Doorbell Stride: 4 bytes 00:14:55.768 NVM Subsystem Reset: Not Supported 00:14:55.768 Command Sets Supported 00:14:55.768 NVM Command Set: Supported 00:14:55.768 Boot Partition: Not Supported 00:14:55.768 Memory Page Size Minimum: 4096 bytes 00:14:55.768 Memory Page Size Maximum: 4096 bytes 00:14:55.768 Persistent Memory Region: Not Supported 00:14:55.768 Optional Asynchronous Events Supported 00:14:55.768 Namespace Attribute Notices: Supported 00:14:55.768 Firmware Activation Notices: Not Supported 00:14:55.768 ANA Change Notices: Not Supported 00:14:55.768 PLE Aggregate Log Change Notices: Not Supported 00:14:55.768 LBA Status Info Alert Notices: Not Supported 00:14:55.768 EGE Aggregate Log Change Notices: Not Supported 00:14:55.768 Normal NVM Subsystem Shutdown event: Not Supported 00:14:55.768 Zone Descriptor Change Notices: Not Supported 00:14:55.768 Discovery Log Change Notices: Not Supported 00:14:55.768 Controller Attributes 00:14:55.768 128-bit Host Identifier: Supported 00:14:55.768 Non-Operational Permissive Mode: Not Supported 00:14:55.768 NVM Sets: Not Supported 00:14:55.768 Read Recovery Levels: Not Supported 00:14:55.768 Endurance Groups: Not Supported 00:14:55.768 Predictable Latency Mode: Not Supported 00:14:55.768 Traffic Based Keep ALive: Not Supported 00:14:55.768 Namespace Granularity: Not Supported 00:14:55.768 SQ Associations: Not Supported 00:14:55.768 UUID List: Not Supported 00:14:55.768 Multi-Domain Subsystem: Not Supported 00:14:55.768 Fixed Capacity Management: Not Supported 00:14:55.768 Variable Capacity Management: Not Supported 00:14:55.768 Delete Endurance Group: Not Supported 00:14:55.768 Delete NVM Set: Not Supported 00:14:55.768 Extended LBA Formats Supported: Not Supported 00:14:55.768 Flexible Data Placement Supported: Not Supported 00:14:55.768 00:14:55.768 Controller Memory Buffer Support 00:14:55.768 ================================ 00:14:55.768 Supported: No 00:14:55.768 00:14:55.768 Persistent Memory Region Support 00:14:55.768 ================================ 00:14:55.768 Supported: No 00:14:55.768 00:14:55.768 Admin Command Set Attributes 00:14:55.768 ============================ 00:14:55.768 Security Send/Receive: Not Supported 00:14:55.768 Format NVM: Not Supported 00:14:55.768 Firmware Activate/Download: Not Supported 00:14:55.768 Namespace Management: Not Supported 00:14:55.768 Device Self-Test: Not Supported 00:14:55.768 Directives: Not Supported 00:14:55.768 NVMe-MI: Not Supported 00:14:55.768 Virtualization Management: Not Supported 00:14:55.768 Doorbell Buffer Config: Not Supported 00:14:55.768 Get LBA Status Capability: Not Supported 00:14:55.768 Command & Feature Lockdown Capability: Not Supported 00:14:55.768 Abort Command Limit: 4 00:14:55.768 Async Event Request Limit: 4 00:14:55.768 Number of Firmware Slots: N/A 00:14:55.768 Firmware Slot 1 Read-Only: N/A 00:14:55.768 [2024-05-16 18:36:09.200009] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.768 [2024-05-16 18:36:09.200012] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2d030) on tqpair=0xce9a60 00:14:55.768 [2024-05-16 18:36:09.200022] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.768 [2024-05-16 
18:36:09.200028] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.768 [2024-05-16 18:36:09.200032] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.768 [2024-05-16 18:36:09.200035] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2d190) on tqpair=0xce9a60 Firmware Activation Without Reset: N/A 00:14:55.768 Multiple Update Detection Support: N/A 00:14:55.768 Firmware Update Granularity: No Information Provided 00:14:55.768 Per-Namespace SMART Log: No 00:14:55.768 Asymmetric Namespace Access Log Page: Not Supported 00:14:55.768 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:55.768 Command Effects Log Page: Supported 00:14:55.768 Get Log Page Extended Data: Supported 00:14:55.768 Telemetry Log Pages: Not Supported 00:14:55.768 Persistent Event Log Pages: Not Supported 00:14:55.768 Supported Log Pages Log Page: May Support 00:14:55.768 Commands Supported & Effects Log Page: Not Supported 00:14:55.768 Feature Identifiers & Effects Log Page:May Support 00:14:55.768 NVMe-MI Commands & Effects Log Page: May Support 00:14:55.768 Data Area 4 for Telemetry Log: Not Supported 00:14:55.768 Error Log Page Entries Supported: 128 00:14:55.768 Keep Alive: Supported 00:14:55.768 Keep Alive Granularity: 10000 ms 00:14:55.768 00:14:55.768 NVM Command Set Attributes 00:14:55.768 ========================== 00:14:55.768 Submission Queue Entry Size 00:14:55.768 Max: 64 00:14:55.768 Min: 64 00:14:55.768 Completion Queue Entry Size 00:14:55.768 Max: 16 00:14:55.768 Min: 16 00:14:55.768 Number of Namespaces: 32 00:14:55.768 Compare Command: Supported 00:14:55.768 Write Uncorrectable Command: Not Supported 00:14:55.768 Dataset Management Command: Supported 00:14:55.768 Write Zeroes Command: Supported 00:14:55.768 Set Features Save Field: Not Supported 00:14:55.768 Reservations: Supported 00:14:55.768 Timestamp: Not Supported 00:14:55.768 Copy: Supported 00:14:55.768 Volatile Write Cache: Present 00:14:55.768 Atomic Write Unit (Normal): 1 00:14:55.768 Atomic Write Unit (PFail): 1 00:14:55.768 Atomic Compare & Write Unit: 1 00:14:55.768 Fused Compare & Write: Supported 00:14:55.768 Scatter-Gather List 00:14:55.768 SGL Command Set: Supported 00:14:55.768 SGL Keyed: Supported 00:14:55.768 SGL Bit Bucket Descriptor: Not Supported 00:14:55.768 SGL Metadata Pointer: Not Supported 00:14:55.768 Oversized SGL: Not Supported 00:14:55.768 SGL Metadata Address: Not Supported 00:14:55.768 SGL Offset: Supported 00:14:55.768 Transport SGL Data Block: Not Supported 00:14:55.768 Replay Protected Memory Block: Not Supported 00:14:55.768 00:14:55.768 Firmware Slot Information 00:14:55.768 ========================= 00:14:55.768 Active slot: 1 00:14:55.768 Slot 1 Firmware Revision: 24.09 00:14:55.768 00:14:55.768 00:14:55.768 Commands Supported and Effects 00:14:55.768 ============================== 00:14:55.768 Admin Commands 00:14:55.768 -------------- 00:14:55.768 Get Log Page (02h): Supported 00:14:55.768 Identify (06h): Supported 00:14:55.768 Abort (08h): Supported 00:14:55.768 Set Features (09h): Supported 00:14:55.768 Get Features (0Ah): Supported 00:14:55.768 Asynchronous Event Request (0Ch): Supported 00:14:55.768 Keep Alive (18h): Supported 00:14:55.768 I/O Commands 00:14:55.768 ------------ 00:14:55.768 Flush (00h): Supported LBA-Change 00:14:55.768 Write (01h): Supported LBA-Change 00:14:55.768 Read (02h): Supported 00:14:55.768 Compare (05h): Supported 00:14:55.768 Write Zeroes (08h): Supported LBA-Change 00:14:55.768 Dataset Management (09h): Supported LBA-Change 
00:14:55.768 Copy (19h): Supported LBA-Change 00:14:55.768 Unknown (79h): Supported LBA-Change 00:14:55.768 Unknown (7Ah): Supported 00:14:55.768 00:14:55.768 Error Log 00:14:55.768 ========= 00:14:55.768 00:14:55.768 Arbitration 00:14:55.768 =========== 00:14:55.768 Arbitration Burst: 1 00:14:55.768 00:14:55.768 Power Management 00:14:55.768 ================ 00:14:55.768 Number of Power States: 1 00:14:55.768 Current Power State: Power State #0 00:14:55.768 Power State #0: 00:14:55.768 Max Power: 0.00 W 00:14:55.768 Non-Operational State: Operational 00:14:55.768 Entry Latency: Not Reported 00:14:55.768 Exit Latency: Not Reported 00:14:55.768 Relative Read Throughput: 0 00:14:55.768 Relative Read Latency: 0 00:14:55.768 Relative Write Throughput: 0 00:14:55.768 Relative Write Latency: 0 00:14:55.768 Idle Power: Not Reported 00:14:55.768 Active Power: Not Reported 00:14:55.768 Non-Operational Permissive Mode: Not Supported 00:14:55.768 00:14:55.768 Health Information 00:14:55.768 ================== 00:14:55.768 Critical Warnings: 00:14:55.768 Available Spare Space: OK 00:14:55.768 Temperature: OK 00:14:55.768 Device Reliability: OK 00:14:55.768 Read Only: No 00:14:55.768 Volatile Memory Backup: OK 00:14:55.768 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:55.768 Temperature Threshold: [2024-05-16 18:36:09.200142] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.768 [2024-05-16 18:36:09.200151] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xce9a60) 00:14:55.768 [2024-05-16 18:36:09.200160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.768 [2024-05-16 18:36:09.200186] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2d190, cid 7, qid 0 00:14:55.768 [2024-05-16 18:36:09.200278] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.768 [2024-05-16 18:36:09.200300] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.768 [2024-05-16 18:36:09.200304] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.768 [2024-05-16 18:36:09.200308] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2d190) on tqpair=0xce9a60 00:14:55.768 [2024-05-16 18:36:09.200341] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:55.768 [2024-05-16 18:36:09.200355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.768 [2024-05-16 18:36:09.200362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.768 [2024-05-16 18:36:09.200368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.769 [2024-05-16 18:36:09.200374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.769 [2024-05-16 18:36:09.200383] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200388] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200391] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.200399] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.200420] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.200484] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.200498] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.200501] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200505] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.200513] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200517] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200521] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.200528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.200549] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.200645] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.200668] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.200672] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200676] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.200681] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:55.769 [2024-05-16 18:36:09.200686] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:55.769 [2024-05-16 18:36:09.200695] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200700] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200703] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.200710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.200727] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.200789] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.200796] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.200799] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200803] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.200814] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200818] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200822] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.200828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.200863] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.200940] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.200954] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.200959] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200963] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.200974] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200979] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.200983] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.200991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.201011] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.201067] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.201074] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.201078] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201082] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.201092] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201097] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.201108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.201127] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.201184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.201190] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.201194] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201198] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.201224] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201228] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201247] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.201254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.201271] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.201333] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.201344] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.201348] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201352] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.201362] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201367] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201370] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.201377] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.201394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.201454] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.201461] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.201464] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201468] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.201477] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201482] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201485] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.201492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.201508] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.201583] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.201590] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.201593] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201597] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.201607] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201611] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201615] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.201622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.201639] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 
[2024-05-16 18:36:09.201697] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.201704] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.201707] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201711] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.201721] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201725] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201729] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.201735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.201752] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.201809] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.201816] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.201819] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201823] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.201849] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201865] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201869] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.201877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.201896] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.769 [2024-05-16 18:36:09.201967] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.769 [2024-05-16 18:36:09.201974] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.769 [2024-05-16 18:36:09.201977] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201981] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.769 [2024-05-16 18:36:09.201992] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.201996] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.769 [2024-05-16 18:36:09.202000] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.769 [2024-05-16 18:36:09.202007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.769 [2024-05-16 18:36:09.202025] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.202082] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.202089] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:14:55.770 [2024-05-16 18:36:09.202092] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202096] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.202106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202110] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202114] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.202121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.202138] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.202195] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.202202] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.202205] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202209] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.202219] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202224] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202228] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.202249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.202266] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.202334] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.202346] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.202350] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202353] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.202364] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202368] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202372] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.202379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.202396] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.202446] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.202457] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.202461] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202464] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.202475] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202479] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202482] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.202506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.202523] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.202592] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.202600] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.202603] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202607] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.202618] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202622] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202626] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.202633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.202650] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.202710] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.202716] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.202720] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202724] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.202734] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202739] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202742] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.202749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.202767] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.202858] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.202864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.202882] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202886] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.202898] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202903] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.202907] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.202929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.202949] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.203005] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.203012] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.203015] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203019] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.203029] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203034] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203037] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.203044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.203063] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.203121] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.203127] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.203131] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203135] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.203145] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203149] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203153] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.203185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.203206] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.203262] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.203269] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.203273] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203277] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.203288] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203294] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203298] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 
[2024-05-16 18:36:09.203305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.203323] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.203379] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.203386] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.203390] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203395] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.203406] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203410] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203414] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.203421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.203439] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.203515] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.203521] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.203525] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203529] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.203539] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203543] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203546] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.203553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.203570] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.203639] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.203661] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.203664] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203668] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.203678] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203682] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203686] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.203693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.203710] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.203762] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.203773] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.203777] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203781] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.203792] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203796] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.203800] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.203807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.206906] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.206931] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.206938] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.206942] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.206946] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.770 [2024-05-16 18:36:09.206960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.206965] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.206969] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xce9a60) 00:14:55.770 [2024-05-16 18:36:09.206977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.770 [2024-05-16 18:36:09.207000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd2cc10, cid 3, qid 0 00:14:55.770 [2024-05-16 18:36:09.207060] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:55.770 [2024-05-16 18:36:09.207067] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:55.770 [2024-05-16 18:36:09.207070] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:55.770 [2024-05-16 18:36:09.207074] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd2cc10) on tqpair=0xce9a60 00:14:55.771 [2024-05-16 18:36:09.207082] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:14:55.771 0 Kelvin (-273 Celsius) 00:14:55.771 Available Spare: 0% 00:14:55.771 Available Spare Threshold: 0% 00:14:55.771 Life Percentage Used: 0% 00:14:55.771 Data Units Read: 0 00:14:55.771 Data Units Written: 0 00:14:55.771 Host Read Commands: 0 00:14:55.771 Host Write Commands: 0 00:14:55.771 Controller Busy Time: 0 minutes 00:14:55.771 Power Cycles: 0 00:14:55.771 Power On Hours: 0 hours 00:14:55.771 Unsafe Shutdowns: 0 00:14:55.771 Unrecoverable Media Errors: 0 00:14:55.771 Lifetime Error Log Entries: 0 00:14:55.771 Warning Temperature Time: 0 minutes 00:14:55.771 Critical Temperature 
Time: 0 minutes 00:14:55.771 00:14:55.771 Number of Queues 00:14:55.771 ================ 00:14:55.771 Number of I/O Submission Queues: 127 00:14:55.771 Number of I/O Completion Queues: 127 00:14:55.771 00:14:55.771 Active Namespaces 00:14:55.771 ================= 00:14:55.771 Namespace ID:1 00:14:55.771 Error Recovery Timeout: Unlimited 00:14:55.771 Command Set Identifier: NVM (00h) 00:14:55.771 Deallocate: Supported 00:14:55.771 Deallocated/Unwritten Error: Not Supported 00:14:55.771 Deallocated Read Value: Unknown 00:14:55.771 Deallocate in Write Zeroes: Not Supported 00:14:55.771 Deallocated Guard Field: 0xFFFF 00:14:55.771 Flush: Supported 00:14:55.771 Reservation: Supported 00:14:55.771 Namespace Sharing Capabilities: Multiple Controllers 00:14:55.771 Size (in LBAs): 131072 (0GiB) 00:14:55.771 Capacity (in LBAs): 131072 (0GiB) 00:14:55.771 Utilization (in LBAs): 131072 (0GiB) 00:14:55.771 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:55.771 EUI64: ABCDEF0123456789 00:14:55.771 UUID: 3c7250fe-8e19-4875-a9b0-29b1e819f3ee 00:14:55.771 Thin Provisioning: Not Supported 00:14:55.771 Per-NS Atomic Units: Yes 00:14:55.771 Atomic Boundary Size (Normal): 0 00:14:55.771 Atomic Boundary Size (PFail): 0 00:14:55.771 Atomic Boundary Offset: 0 00:14:55.771 Maximum Single Source Range Length: 65535 00:14:55.771 Maximum Copy Length: 65535 00:14:55.771 Maximum Source Range Count: 1 00:14:55.771 NGUID/EUI64 Never Reused: No 00:14:55.771 Namespace Write Protected: No 00:14:55.771 Number of LBA Formats: 1 00:14:55.771 Current LBA Format: LBA Format #00 00:14:55.771 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:55.771 00:14:55.771 18:36:09 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.029 rmmod nvme_tcp 00:14:56.029 rmmod nvme_fabrics 00:14:56.029 rmmod nvme_keyring 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 74836 ']' 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 74836 00:14:56.029 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 74836 ']' 00:14:56.030 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 
74836 00:14:56.030 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:14:56.030 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:56.030 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74836 00:14:56.030 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:56.030 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:56.030 killing process with pid 74836 00:14:56.030 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74836' 00:14:56.030 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 74836 00:14:56.030 [2024-05-16 18:36:09.384506] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:56.030 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 74836 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:56.288 00:14:56.288 real 0m2.654s 00:14:56.288 user 0m7.412s 00:14:56.288 sys 0m0.730s 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:56.288 ************************************ 00:14:56.288 END TEST nvmf_identify 00:14:56.288 ************************************ 00:14:56.288 18:36:09 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:56.288 18:36:09 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:56.288 18:36:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:56.288 18:36:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:56.288 18:36:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:56.288 ************************************ 00:14:56.288 START TEST nvmf_perf 00:14:56.288 ************************************ 00:14:56.288 18:36:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:56.547 * Looking for test storage... 
00:14:56.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.547 18:36:09 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:56.548 Cannot find device "nvmf_tgt_br" 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:56.548 Cannot find device "nvmf_tgt_br2" 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:56.548 Cannot find device "nvmf_tgt_br" 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:56.548 Cannot find device "nvmf_tgt_br2" 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:56.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:56.548 18:36:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:56.548 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:56.548 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:56.548 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:56.548 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:56.548 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:56.548 
18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:56.548 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:56.548 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:56.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:56.807 00:14:56.807 --- 10.0.0.2 ping statistics --- 00:14:56.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.807 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:56.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:56.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:14:56.807 00:14:56.807 --- 10.0.0.3 ping statistics --- 00:14:56.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.807 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:56.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:56.807 00:14:56.807 --- 10.0.0.1 ping statistics --- 00:14:56.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.807 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75046 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75046 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 75046 ']' 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:56.807 18:36:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:56.807 [2024-05-16 18:36:10.297588] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:14:56.807 [2024-05-16 18:36:10.297731] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.065 [2024-05-16 18:36:10.441765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.322 [2024-05-16 18:36:10.589636] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.322 [2024-05-16 18:36:10.589714] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
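The app_setup_trace notice just above can be acted on while the target is still running; a minimal sketch, assuming the nvmf_tgt shared-memory instance id 0 used in this run and a hypothetical /tmp output path:

  # decode the live tracepoint buffer of the nvmf target that was started with -i 0
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # or keep the raw shared-memory buffer around for later inspection
  cp /dev/shm/nvmf_trace.0 /tmp/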
00:14:57.323 [2024-05-16 18:36:10.589734] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.323 [2024-05-16 18:36:10.589744] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.323 [2024-05-16 18:36:10.589751] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.323 [2024-05-16 18:36:10.590518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.323 [2024-05-16 18:36:10.590732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.323 [2024-05-16 18:36:10.590648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.323 [2024-05-16 18:36:10.590727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.323 [2024-05-16 18:36:10.673383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:57.889 18:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:57.889 18:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:14:57.889 18:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.889 18:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.889 18:36:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:57.889 18:36:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.889 18:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:57.889 18:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:58.455 18:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:58.455 18:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:58.713 18:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:58.713 18:36:11 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.972 18:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:58.972 18:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:58.972 18:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:58.972 18:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:58.972 18:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:59.230 [2024-05-16 18:36:12.592216] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:59.230 18:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:59.490 18:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:59.490 18:36:12 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:59.748 18:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:59.748 18:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:15:00.008 18:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.275 [2024-05-16 18:36:13.626339] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:00.275 [2024-05-16 18:36:13.627255] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.275 18:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:00.592 18:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:00.592 18:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:00.592 18:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:00.592 18:36:13 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:01.526 Initializing NVMe Controllers 00:15:01.526 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:01.526 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:01.526 Initialization complete. Launching workers. 00:15:01.526 ======================================================== 00:15:01.526 Latency(us) 00:15:01.526 Device Information : IOPS MiB/s Average min max 00:15:01.526 PCIE (0000:00:10.0) NSID 1 from core 0: 22044.05 86.11 1452.01 297.99 8126.63 00:15:01.526 ======================================================== 00:15:01.526 Total : 22044.05 86.11 1452.01 297.99 8126.63 00:15:01.526 00:15:01.526 18:36:15 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:02.902 Initializing NVMe Controllers 00:15:02.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:02.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:02.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:02.902 Initialization complete. Launching workers. 
00:15:02.902 ======================================================== 00:15:02.902 Latency(us) 00:15:02.902 Device Information : IOPS MiB/s Average min max 00:15:02.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3046.65 11.90 326.62 108.65 7157.94 00:15:02.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.86 0.49 8071.87 6926.98 11982.22 00:15:02.903 ======================================================== 00:15:02.903 Total : 3171.51 12.39 631.55 108.65 11982.22 00:15:02.903 00:15:02.903 18:36:16 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:04.279 Initializing NVMe Controllers 00:15:04.279 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:04.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:04.279 Initialization complete. Launching workers. 00:15:04.279 ======================================================== 00:15:04.279 Latency(us) 00:15:04.279 Device Information : IOPS MiB/s Average min max 00:15:04.279 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8217.84 32.10 3893.66 539.00 11145.80 00:15:04.279 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3875.21 15.14 8279.81 6284.16 16028.15 00:15:04.279 ======================================================== 00:15:04.279 Total : 12093.05 47.24 5299.20 539.00 16028.15 00:15:04.279 00:15:04.279 18:36:17 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:04.279 18:36:17 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:06.821 Initializing NVMe Controllers 00:15:06.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.821 Controller IO queue size 128, less than required. 00:15:06.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:06.821 Controller IO queue size 128, less than required. 00:15:06.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:06.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:06.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:06.821 Initialization complete. Launching workers. 
00:15:06.821 ======================================================== 00:15:06.821 Latency(us) 00:15:06.821 Device Information : IOPS MiB/s Average min max 00:15:06.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1412.71 353.18 93072.94 49677.13 148941.01 00:15:06.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 615.35 153.84 212403.98 83141.09 330172.91 00:15:06.821 ======================================================== 00:15:06.821 Total : 2028.06 507.02 129280.09 49677.13 330172.91 00:15:06.821 00:15:06.821 18:36:20 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:06.821 Initializing NVMe Controllers 00:15:06.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:06.821 Controller IO queue size 128, less than required. 00:15:06.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:06.821 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:06.821 Controller IO queue size 128, less than required. 00:15:06.821 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:06.821 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:06.821 WARNING: Some requested NVMe devices were skipped 00:15:06.821 No valid NVMe controllers or AIO or URING devices found 00:15:07.080 18:36:20 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:09.625 Initializing NVMe Controllers 00:15:09.625 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:09.625 Controller IO queue size 128, less than required. 00:15:09.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:09.625 Controller IO queue size 128, less than required. 00:15:09.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:09.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:09.625 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:09.625 Initialization complete. Launching workers. 
00:15:09.625 00:15:09.625 ==================== 00:15:09.625 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:09.625 TCP transport: 00:15:09.625 polls: 10771 00:15:09.625 idle_polls: 7669 00:15:09.625 sock_completions: 3102 00:15:09.625 nvme_completions: 5211 00:15:09.625 submitted_requests: 7902 00:15:09.625 queued_requests: 1 00:15:09.625 00:15:09.625 ==================== 00:15:09.625 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:09.625 TCP transport: 00:15:09.625 polls: 11188 00:15:09.625 idle_polls: 8229 00:15:09.625 sock_completions: 2959 00:15:09.625 nvme_completions: 5271 00:15:09.625 submitted_requests: 8002 00:15:09.625 queued_requests: 1 00:15:09.625 ======================================================== 00:15:09.625 Latency(us) 00:15:09.625 Device Information : IOPS MiB/s Average min max 00:15:09.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1301.14 325.28 100030.01 50431.45 184865.39 00:15:09.625 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1316.12 329.03 98222.31 49126.12 151272.88 00:15:09.625 ======================================================== 00:15:09.625 Total : 2617.26 654.31 99120.99 49126.12 184865.39 00:15:09.625 00:15:09.625 18:36:22 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:09.625 18:36:22 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:09.905 rmmod nvme_tcp 00:15:09.905 rmmod nvme_fabrics 00:15:09.905 rmmod nvme_keyring 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75046 ']' 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75046 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 75046 ']' 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 75046 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75046 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 
-- # echo 'killing process with pid 75046' 00:15:09.905 killing process with pid 75046 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 75046 00:15:09.905 [2024-05-16 18:36:23.310495] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:09.905 18:36:23 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 75046 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:10.841 00:15:10.841 real 0m14.385s 00:15:10.841 user 0m52.888s 00:15:10.841 sys 0m4.151s 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:10.841 18:36:24 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:10.842 ************************************ 00:15:10.842 END TEST nvmf_perf 00:15:10.842 ************************************ 00:15:10.842 18:36:24 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:10.842 18:36:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:10.842 18:36:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:10.842 18:36:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:10.842 ************************************ 00:15:10.842 START TEST nvmf_fio_host 00:15:10.842 ************************************ 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:10.842 * Looking for test storage... 
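For reference, the nvmf_perf section that just finished reduces to a short RPC sequence that builds the TCP target, followed by spdk_nvme_perf pointed at the resulting listener. A minimal sketch of that flow, reusing the paths, NQN, and addresses that appear in the trace above (the malloc size and serial number are the harness defaults, not requirements):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side: one malloc bdev plus the local NVMe bdev, exported over TCP.
    # (Nvme0n1 itself comes from gen_nvme.sh | rpc.py load_subsystem_config,
    # which attaches the local controller at 0000:00:10.0.)
    $rpc bdev_malloc_create 64 512                      # creates Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: the first fabrics run shown above (queue depth 1, 4 KiB I/O).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -q 1 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The later runs recorded above only vary the queue depth, I/O size, and extra flags on the same command line.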
00:15:10.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:15:10.842 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:10.843 18:36:24 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:10.843 Cannot find device "nvmf_tgt_br" 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:10.843 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:10.843 Cannot find device "nvmf_tgt_br2" 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:11.102 Cannot find device "nvmf_tgt_br" 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:11.102 Cannot find device "nvmf_tgt_br2" 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:11.102 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:11.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:15:11.364 00:15:11.364 --- 10.0.0.2 ping statistics --- 00:15:11.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.364 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:11.364 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:11.364 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:15:11.364 00:15:11.364 --- 10.0.0.3 ping statistics --- 00:15:11.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.364 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:11.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:11.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:11.364 00:15:11.364 --- 10.0.0.1 ping statistics --- 00:15:11.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.364 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=75449 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 75449 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 75449 ']' 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:11.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:11.364 18:36:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:11.364 [2024-05-16 18:36:24.732112] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:15:11.364 [2024-05-16 18:36:24.732231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.624 [2024-05-16 18:36:24.875496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.624 [2024-05-16 18:36:25.024792] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.624 [2024-05-16 18:36:25.024861] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:11.624 [2024-05-16 18:36:25.024874] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.624 [2024-05-16 18:36:25.024883] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.624 [2024-05-16 18:36:25.024891] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.624 [2024-05-16 18:36:25.025652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.624 [2024-05-16 18:36:25.025812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.624 [2024-05-16 18:36:25.026334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.624 [2024-05-16 18:36:25.026374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.624 [2024-05-16 18:36:25.102482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.560 [2024-05-16 18:36:25.728952] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.560 Malloc1 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.560 
[2024-05-16 18:36:25.842981] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:12.560 [2024-05-16 18:36:25.844026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:15:12.560 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:15:12.561 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:15:12.561 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:12.561 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:12.561 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:12.561 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:15:12.561 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:15:12.561 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:15:12.561 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:12.561 18:36:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:12.561 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:12.561 fio-3.35 00:15:12.561 Starting 1 thread 00:15:15.092 00:15:15.092 test: (groupid=0, jobs=1): err= 0: pid=75504: Thu May 16 18:36:28 2024 00:15:15.092 read: IOPS=8416, BW=32.9MiB/s (34.5MB/s)(66.0MiB/2007msec) 00:15:15.092 slat (usec): min=2, max=354, avg= 2.79, stdev= 3.61 00:15:15.092 clat (usec): min=2705, max=14698, avg=7913.41, stdev=704.06 00:15:15.092 lat (usec): min=2741, max=14700, avg=7916.20, stdev=703.91 00:15:15.092 clat percentiles (usec): 00:15:15.092 | 1.00th=[ 6718], 5.00th=[ 7046], 10.00th=[ 7177], 20.00th=[ 7439], 00:15:15.092 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:15:15.092 | 70.00th=[ 8094], 80.00th=[ 8356], 90.00th=[ 8586], 95.00th=[ 9110], 00:15:15.092 | 99.00th=[10421], 99.50th=[10814], 99.90th=[11731], 99.95th=[13173], 00:15:15.092 | 99.99th=[14615] 00:15:15.092 bw ( KiB/s): min=32456, max=34488, per=100.00%, avg=33664.00, stdev=858.77, samples=4 00:15:15.092 iops : min= 8114, max= 8622, avg=8416.00, stdev=214.69, samples=4 00:15:15.092 write: IOPS=8413, BW=32.9MiB/s (34.5MB/s)(66.0MiB/2007msec); 0 zone resets 00:15:15.092 slat (usec): min=2, max=293, avg= 2.91, stdev= 2.77 00:15:15.092 clat (usec): min=2539, max=13426, avg=7236.79, stdev=651.21 00:15:15.092 lat (usec): min=2554, max=13429, avg=7239.69, stdev=651.16 00:15:15.092 clat percentiles (usec): 00:15:15.092 | 1.00th=[ 6063], 5.00th=[ 6456], 10.00th=[ 6587], 20.00th=[ 6783], 00:15:15.092 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7242], 00:15:15.092 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8356], 00:15:15.092 | 99.00th=[ 9634], 99.50th=[10028], 99.90th=[11863], 99.95th=[12780], 00:15:15.092 | 99.99th=[13435] 00:15:15.092 bw ( KiB/s): min=33288, max=33984, per=99.95%, avg=33634.00, stdev=368.83, samples=4 00:15:15.092 iops : min= 8322, max= 8496, avg=8408.50, stdev=92.21, samples=4 00:15:15.092 lat (msec) : 4=0.07%, 10=98.77%, 20=1.16% 00:15:15.092 cpu : usr=65.05%, sys=26.02%, ctx=20, majf=0, minf=5 00:15:15.092 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:15.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.092 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:15.092 issued rwts: total=16891,16885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.092 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:15.092 00:15:15.092 Run status group 0 (all jobs): 00:15:15.092 READ: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=66.0MiB (69.2MB), run=2007-2007msec 00:15:15.092 WRITE: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=66.0MiB (69.2MB), run=2007-2007msec 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:15.093 18:36:28 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:15.093 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:15.093 fio-3.35 00:15:15.093 Starting 1 thread 00:15:17.623 00:15:17.623 test: (groupid=0, jobs=1): err= 0: pid=75553: Thu May 16 18:36:30 2024 00:15:17.623 read: IOPS=7976, BW=125MiB/s (131MB/s)(250MiB/2007msec) 00:15:17.623 slat (usec): min=2, max=137, avg= 3.97, stdev= 2.30 00:15:17.623 clat (usec): min=2733, max=19034, avg=9031.31, stdev=2876.17 00:15:17.623 lat (usec): min=2736, max=19037, avg=9035.28, stdev=2876.21 00:15:17.623 clat percentiles (usec): 00:15:17.623 | 1.00th=[ 4113], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6390], 00:15:17.623 | 30.00th=[ 7242], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[ 9503], 00:15:17.623 | 70.00th=[10421], 80.00th=[11207], 90.00th=[13042], 95.00th=[14353], 00:15:17.623 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18482], 99.95th=[19006], 00:15:17.623 | 99.99th=[19006] 00:15:17.623 bw ( KiB/s): min=58496, max=69280, per=50.46%, avg=64400.00, stdev=4532.61, samples=4 
00:15:17.623 iops : min= 3656, max= 4330, avg=4025.00, stdev=283.29, samples=4 00:15:17.623 write: IOPS=4475, BW=69.9MiB/s (73.3MB/s)(132MiB/1888msec); 0 zone resets 00:15:17.623 slat (usec): min=32, max=348, avg=40.12, stdev= 8.42 00:15:17.623 clat (usec): min=6160, max=23505, avg=12562.73, stdev=2499.64 00:15:17.623 lat (usec): min=6201, max=23542, avg=12602.85, stdev=2500.44 00:15:17.623 clat percentiles (usec): 00:15:17.623 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:15:17.623 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12256], 60.00th=[13042], 00:15:17.623 | 70.00th=[13829], 80.00th=[14746], 90.00th=[15795], 95.00th=[16909], 00:15:17.623 | 99.00th=[19268], 99.50th=[19792], 99.90th=[21890], 99.95th=[22152], 00:15:17.623 | 99.99th=[23462] 00:15:17.623 bw ( KiB/s): min=60896, max=72800, per=93.77%, avg=67144.00, stdev=4945.24, samples=4 00:15:17.623 iops : min= 3806, max= 4550, avg=4196.50, stdev=309.08, samples=4 00:15:17.623 lat (msec) : 4=0.48%, 10=47.41%, 20=51.99%, 50=0.11% 00:15:17.623 cpu : usr=83.35%, sys=12.51%, ctx=6, majf=0, minf=10 00:15:17.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:17.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:17.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:17.623 issued rwts: total=16008,8449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:17.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:17.623 00:15:17.623 Run status group 0 (all jobs): 00:15:17.623 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=250MiB (262MB), run=2007-2007msec 00:15:17.623 WRITE: bw=69.9MiB/s (73.3MB/s), 69.9MiB/s-69.9MiB/s (73.3MB/s-73.3MB/s), io=132MiB (138MB), run=1888-1888msec 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.623 rmmod nvme_tcp 00:15:17.623 rmmod nvme_fabrics 00:15:17.623 rmmod nvme_keyring 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75449 ']' 00:15:17.623 18:36:30 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75449 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 75449 ']' 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 75449 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75449 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:17.623 killing process with pid 75449 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75449' 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 75449 00:15:17.623 [2024-05-16 18:36:30.969399] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:17.623 18:36:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 75449 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:17.882 00:15:17.882 real 0m7.175s 00:15:17.882 user 0m27.566s 00:15:17.882 sys 0m2.288s 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:17.882 18:36:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:17.882 ************************************ 00:15:17.882 END TEST nvmf_fio_host 00:15:17.882 ************************************ 00:15:18.140 18:36:31 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:18.140 18:36:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:18.140 18:36:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:18.140 18:36:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:18.140 ************************************ 00:15:18.140 START TEST nvmf_failover 00:15:18.140 ************************************ 00:15:18.140 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:18.140 * Looking for test storage... 
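For reference, the fio jobs in the nvmf_fio_host section above do not go through a kernel block device: fio is started with SPDK's external NVMe ioengine preloaded, and the "filename" carries an NVMe-oF transport ID rather than a path. A condensed sketch of the invocation, reusing the build paths, job file, and listener address shown in the trace (the second run works the same way with mock_sgl_config.fio):

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

    # ioengine=spdk in the job file hands I/O to the preloaded plugin, which
    # connects to the target at 10.0.0.2:4420 and drives namespace 1 entirely
    # in user space, which is why no /dev/nvme* device appears in the log.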
00:15:18.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:18.140 18:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:18.140 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:18.141 
18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:18.141 Cannot find device "nvmf_tgt_br" 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:18.141 Cannot find device "nvmf_tgt_br2" 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:18.141 Cannot find device "nvmf_tgt_br" 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:18.141 Cannot find device "nvmf_tgt_br2" 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:18.141 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:18.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:18.399 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:18.399 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:18.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:18.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:15:18.400 00:15:18.400 --- 10.0.0.2 ping statistics --- 00:15:18.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.400 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:18.400 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:18.400 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:18.400 00:15:18.400 --- 10.0.0.3 ping statistics --- 00:15:18.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.400 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:18.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:18.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:18.400 00:15:18.400 --- 10.0.0.1 ping statistics --- 00:15:18.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:18.400 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:18.400 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:18.659 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=75766 00:15:18.659 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 75766 00:15:18.659 18:36:31 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:18.659 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 75766 ']' 00:15:18.659 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.659 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:18.659 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
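For orientation, the nvmf_veth_init trace above reduces to roughly the topology setup below. This is a condensed sketch of the commands visible in the log (the teardown of any previous run is omitted), not the full nvmf/common.sh helper: one veth pair for the initiator, two pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, and a bridge joining the three host-side peers, which is why the three pings succeed.

    ip netns add nvmf_tgt_ns_spdk                               # target will run inside its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, gets 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target path #1, gets 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target path #2, gets 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge ties the three *_br peers together
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                          # reachability checks, as in the log above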
00:15:18.659 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:18.659 18:36:31 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:18.659 [2024-05-16 18:36:31.979656] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:15:18.659 [2024-05-16 18:36:31.979869] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.659 [2024-05-16 18:36:32.126435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:18.917 [2024-05-16 18:36:32.284197] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.917 [2024-05-16 18:36:32.284269] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.917 [2024-05-16 18:36:32.284285] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:18.917 [2024-05-16 18:36:32.284297] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:18.917 [2024-05-16 18:36:32.284307] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.917 [2024-05-16 18:36:32.284457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.917 [2024-05-16 18:36:32.284604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.917 [2024-05-16 18:36:32.284609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.917 [2024-05-16 18:36:32.360962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:19.579 18:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:19.579 18:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:15:19.579 18:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.579 18:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.579 18:36:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:19.579 18:36:32 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.579 18:36:32 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:19.837 [2024-05-16 18:36:33.233712] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.837 18:36:33 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:20.095 Malloc0 00:15:20.095 18:36:33 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.352 18:36:33 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:20.610 18:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.868 [2024-05-16 18:36:34.258678] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: 
deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:20.868 [2024-05-16 18:36:34.259026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.868 18:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:21.126 [2024-05-16 18:36:34.487137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:21.126 18:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:21.384 [2024-05-16 18:36:34.775412] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75828 00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75828 /var/tmp/bdevperf.sock 00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 75828 ']' 00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:21.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
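Condensed, the target-side provisioning traced above comes down to the rpc.py sequence below, issued against the nvmf_tgt started earlier inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE). This is only a sketch of what host/failover.sh does, with paths and arguments as they appear in this workspace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192           # transport options as set by nvmf/common.sh for tcp
    $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                         # three TCP listeners on the same address
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

The deprecation notice about [listen_]address.transport printed above comes from the first add_listener call. bdevperf is then launched with its own RPC socket (-z -r /var/tmp/bdevperf.sock), which is what the next part of the log configures.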
00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:21.384 18:36:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:22.757 18:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:22.757 18:36:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:15:22.757 18:36:35 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:22.757 NVMe0n1 00:15:22.757 18:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:23.015 00:15:23.015 18:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75847 00:15:23.015 18:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:23.015 18:36:36 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:24.386 18:36:37 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.386 18:36:37 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:27.670 18:36:40 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:27.670 00:15:27.670 18:36:41 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:27.928 18:36:41 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:31.260 18:36:44 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:31.260 [2024-05-16 18:36:44.662875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:31.260 18:36:44 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:15:32.197 18:36:45 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:32.456 18:36:45 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 75847 00:15:39.056 0 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 75828 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 75828 ']' 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 75828 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75828 00:15:39.056 killing process with pid 75828 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:39.056 
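The failover exercise itself is driven from the initiator side over bdevperf's RPC socket. A condensed sketch of the sequence traced above, with the try.txt/trap plumbing omitted:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bp_sock=/var/tmp/bdevperf.sock
    # bdevperf was started with: -z -r $bp_sock -q 128 -o 4096 -w verify -t 15 -f
    $rpc -s $bp_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $bp_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $bp_sock perform_tests &
    run_test_pid=$!                                        # 15 s verify workload runs in the background
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3
    $rpc -s $bp_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3
    $rpc nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    wait "$run_test_pid"                                   # the "0" above: the workload survived every path change

The long nvme_qpair dump that follows is the try.txt content captured from bdevperf; each "ABORTED - SQ DELETION" completion appears to correspond to an I/O that was outstanding on a queue pair torn down by one of the listener removals and handled by the failover path.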
18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75828' 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 75828 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 75828 00:15:39.056 18:36:51 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:39.056 [2024-05-16 18:36:34.839951] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:15:39.056 [2024-05-16 18:36:34.840078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75828 ] 00:15:39.056 [2024-05-16 18:36:34.976234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.056 [2024-05-16 18:36:35.120451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.056 [2024-05-16 18:36:35.173158] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:39.056 Running I/O for 15 seconds... 00:15:39.056 [2024-05-16 18:36:37.711684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.711771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.711802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.711832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.711851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.711865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.711880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.711894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.711909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.711923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.711938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.711952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.711967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.711980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.711995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:39.056 [2024-05-16 18:36:37.712311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.056 [2024-05-16 18:36:37.712391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.056 [2024-05-16 18:36:37.712404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712612] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.712888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.712924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.712952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.712981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.712996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.713346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.713376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.713405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.713433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.713461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.713489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.713518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 
[2024-05-16 18:36:37.713533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.057 [2024-05-16 18:36:37.713546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.057 [2024-05-16 18:36:37.713589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.057 [2024-05-16 18:36:37.713602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.713965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.713980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.713994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:97 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75704 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.058 [2024-05-16 18:36:37.714663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.714697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:39.058 [2024-05-16 18:36:37.714725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.714752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.714787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.058 [2024-05-16 18:36:37.714802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.058 [2024-05-16 18:36:37.714815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.714841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.059 [2024-05-16 18:36:37.714856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.714871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.059 [2024-05-16 18:36:37.714884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.714899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.059 [2024-05-16 18:36:37.714912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.714927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.714941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.714957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.714970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.714985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.714999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715028] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.059 [2024-05-16 18:36:37.715363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715378] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda4850 is same with the state(5) to be set 00:15:39.059 [2024-05-16 18:36:37.715396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.059 [2024-05-16 18:36:37.715406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.059 [2024-05-16 18:36:37.715418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75896 len:8 PRP1 0x0 PRP2 0x0 00:15:39.059 [2024-05-16 18:36:37.715430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.059 [2024-05-16 18:36:37.715454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.059 [2024-05-16 18:36:37.715464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76352 len:8 PRP1 0x0 PRP2 0x0 00:15:39.059 [2024-05-16 18:36:37.715477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715490] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.059 [2024-05-16 18:36:37.715499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.059 [2024-05-16 18:36:37.715509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76360 len:8 PRP1 0x0 PRP2 0x0 00:15:39.059 [2024-05-16 18:36:37.715522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.059 [2024-05-16 18:36:37.715552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.059 [2024-05-16 18:36:37.715563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76368 len:8 PRP1 0x0 PRP2 0x0 00:15:39.059 [2024-05-16 18:36:37.715575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.059 [2024-05-16 18:36:37.715603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.059 [2024-05-16 18:36:37.715614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76376 len:8 PRP1 0x0 PRP2 0x0 00:15:39.059 [2024-05-16 18:36:37.715626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:39.059 [2024-05-16 18:36:37.715639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.059 [2024-05-16 18:36:37.715649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.059 [2024-05-16 18:36:37.715667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76384 len:8 PRP1 0x0 PRP2 0x0 00:15:39.059 [2024-05-16 18:36:37.715680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.059 [2024-05-16 18:36:37.715703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.059 [2024-05-16 18:36:37.715713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76392 len:8 PRP1 0x0 PRP2 0x0 00:15:39.059 [2024-05-16 18:36:37.715725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.059 [2024-05-16 18:36:37.715747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.059 [2024-05-16 18:36:37.715757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76400 len:8 PRP1 0x0 PRP2 0x0 00:15:39.059 [2024-05-16 18:36:37.715771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.059 [2024-05-16 18:36:37.715794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.059 [2024-05-16 18:36:37.715804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:15:39.059 [2024-05-16 18:36:37.715816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.715893] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xda4850 was disconnected and freed. reset controller. 
00:15:39.059 [2024-05-16 18:36:37.715911] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:39.059 [2024-05-16 18:36:37.715981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.059 [2024-05-16 18:36:37.716001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.716017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.059 [2024-05-16 18:36:37.716032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.716056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.059 [2024-05-16 18:36:37.716070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.716083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.059 [2024-05-16 18:36:37.716096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.059 [2024-05-16 18:36:37.716109] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:39.060 [2024-05-16 18:36:37.716181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd44020 (9): Bad file descriptor 00:15:39.060 [2024-05-16 18:36:37.720113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:39.060 [2024-05-16 18:36:37.752921] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
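Every completion in the burst above carries status (00/08): status code type 0x0 (generic command status) with status code 0x08, which the log itself labels "ABORTED - SQ DELETION". bdev_nvme prints one command/completion pair per outstanding request, aborts the remaining queued I/O, disconnects and frees qpair 0xda4850, starts the failover from 10.0.0.2:4420 to 10.0.0.2:4421, and then resets the controller successfully. The snippet below is a minimal, hypothetical helper (not part of the autotest scripts) for tallying these aborted READ/WRITE completions per failover event from a saved console log; the regular expressions assume only the nvme_qpair print format visible above, and finditer is used because the captured log packs many entries onto each physical line.

#!/usr/bin/env python3
# Hypothetical log-scraping helper: counts aborted I/O commands between failover events.
import re
import sys
from collections import Counter

# Matches nvme_io_qpair_print_command NOTICE entries, e.g.
#   "*NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75624 len:8"
CMD_RE = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:\d+ len:\d+")
# Matches bdev_nvme_failover_trid NOTICE entries, e.g.
#   "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421"
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

def tally(path):
    counts = Counter()
    with open(path) as log:
        for line in log:
            for match in CMD_RE.finditer(line):
                counts[match.group(1)] += 1
            for match in FAILOVER_RE.finditer(line):
                print(f"failover {match.group(1)} -> {match.group(2)}: "
                      f"{counts['READ']} READ / {counts['WRITE']} WRITE commands aborted")
                counts.clear()

if __name__ == "__main__":
    tally(sys.argv[1])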
00:15:39.060 [2024-05-16 18:36:41.383721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.383797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.383834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.383851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.383867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.383879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.383894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:75904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.383906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.383921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.383933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.383948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:75920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.383960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.383975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.383987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384113] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:75992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384395] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.060 [2024-05-16 18:36:41.384439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:75464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:75488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75496 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.060 [2024-05-16 18:36:41.384691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.060 [2024-05-16 18:36:41.384704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.384732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.384759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:75528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.384794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.384821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.384858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.384885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.384911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.384936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:15:39.061 [2024-05-16 18:36:41.384962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.384976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.384989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385273] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385580] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.061 [2024-05-16 18:36:41.385675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.061 [2024-05-16 18:36:41.385873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.061 [2024-05-16 18:36:41.385894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.385916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.385929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.385942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.385955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.385969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:76216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.385981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.385994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 
18:36:41.386419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.062 [2024-05-16 18:36:41.386761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.386985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:33 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.062 [2024-05-16 18:36:41.386997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.062 [2024-05-16 18:36:41.387015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:41.387028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:41.387054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:41.387080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:41.387106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:41.387131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:41.387157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:41.387194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387207] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc5cf0 is same with the state(5) to be set 00:15:39.063 [2024-05-16 18:36:41.387222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.063 [2024-05-16 18:36:41.387232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.063 [2024-05-16 18:36:41.387241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75872 len:8 PRP1 0x0 PRP2 0x0 00:15:39.063 [2024-05-16 18:36:41.387260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:15:39.063 [2024-05-16 18:36:41.387281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.063 [2024-05-16 18:36:41.387291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76392 len:8 PRP1 0x0 PRP2 0x0 00:15:39.063 [2024-05-16 18:36:41.387310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.063 [2024-05-16 18:36:41.387332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.063 [2024-05-16 18:36:41.387341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76400 len:8 PRP1 0x0 PRP2 0x0 00:15:39.063 [2024-05-16 18:36:41.387354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.063 [2024-05-16 18:36:41.387374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.063 [2024-05-16 18:36:41.387384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:15:39.063 [2024-05-16 18:36:41.387395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.063 [2024-05-16 18:36:41.387420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.063 [2024-05-16 18:36:41.387430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76416 len:8 PRP1 0x0 PRP2 0x0 00:15:39.063 [2024-05-16 18:36:41.387441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387455] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.063 [2024-05-16 18:36:41.387464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.063 [2024-05-16 18:36:41.387479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76424 len:8 PRP1 0x0 PRP2 0x0 00:15:39.063 [2024-05-16 18:36:41.387490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.063 [2024-05-16 18:36:41.387511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.063 [2024-05-16 18:36:41.387520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76432 len:8 PRP1 0x0 PRP2 0x0 00:15:39.063 [2024-05-16 18:36:41.387531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.063 [2024-05-16 
18:36:41.387552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.063 [2024-05-16 18:36:41.387561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76440 len:8 PRP1 0x0 PRP2 0x0 00:15:39.063 [2024-05-16 18:36:41.387573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.063 [2024-05-16 18:36:41.387594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.063 [2024-05-16 18:36:41.387603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76448 len:8 PRP1 0x0 PRP2 0x0 00:15:39.063 [2024-05-16 18:36:41.387619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387672] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdc5cf0 was disconnected and freed. reset controller. 00:15:39.063 [2024-05-16 18:36:41.387689] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:15:39.063 [2024-05-16 18:36:41.387748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.063 [2024-05-16 18:36:41.387768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.063 [2024-05-16 18:36:41.387793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.063 [2024-05-16 18:36:41.387818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.063 [2024-05-16 18:36:41.387857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:41.387869] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:39.063 [2024-05-16 18:36:41.387916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd44020 (9): Bad file descriptor 00:15:39.063 [2024-05-16 18:36:41.391429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:39.063 [2024-05-16 18:36:41.422896] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:39.063 [2024-05-16 18:36:45.913666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.063 [2024-05-16 18:36:45.913737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.913763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.063 [2024-05-16 18:36:45.913778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.913792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.063 [2024-05-16 18:36:45.913805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.913831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.063 [2024-05-16 18:36:45.913846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.913860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.063 [2024-05-16 18:36:45.913872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.913886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.063 [2024-05-16 18:36:45.913899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.913913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.063 [2024-05-16 18:36:45.913925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.913965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.063 [2024-05-16 18:36:45.913978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.913992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:45.914005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.914018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:45.914030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.914044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:45.914056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.914070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.063 [2024-05-16 18:36:45.914083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.063 [2024-05-16 18:36:45.914096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:66 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.064 [2024-05-16 18:36:45.914433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.064 [2024-05-16 18:36:45.914458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.064 [2024-05-16 18:36:45.914483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.064 [2024-05-16 18:36:45.914508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.064 [2024-05-16 18:36:45.914534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.064 [2024-05-16 18:36:45.914559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:15:39.064 [2024-05-16 18:36:45.914584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.064 [2024-05-16 18:36:45.914610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 
18:36:45.914943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.914982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.914994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.064 [2024-05-16 18:36:45.915308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.064 [2024-05-16 18:36:45.915321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.915347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.915374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:39.065 [2024-05-16 18:36:45.915808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.065 [2024-05-16 18:36:45.915820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.915846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.915884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.915911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.915937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.915963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.915976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.915989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916082] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.065 [2024-05-16 18:36:45.916244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.065 [2024-05-16 18:36:45.916256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.066 [2024-05-16 18:36:45.916282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.066 [2024-05-16 18:36:45.916308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.066 [2024-05-16 18:36:45.916334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916348] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.066 [2024-05-16 18:36:45.916360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.066 [2024-05-16 18:36:45.916386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.066 [2024-05-16 18:36:45.916412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.066 [2024-05-16 18:36:45.916438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:39.066 [2024-05-16 18:36:45.916469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:39.066 [2024-05-16 18:36:45.916927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.916941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc59b0 is same with the state(5) to be set 00:15:39.066 [2024-05-16 
18:36:45.916957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.916968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.916978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1800 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.916991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2128 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.917037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2136 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.917083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.917127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2152 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.917193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2160 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.917245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917272] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2168 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.917302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.917344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2184 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.917394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2192 len:8 PRP1 0x0 PRP2 0x0 00:15:39.066 [2024-05-16 18:36:45.917435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.066 [2024-05-16 18:36:45.917448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.066 [2024-05-16 18:36:45.917456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.066 [2024-05-16 18:36:45.917466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2200 len:8 PRP1 0x0 PRP2 0x0 00:15:39.067 [2024-05-16 18:36:45.917477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.067 [2024-05-16 18:36:45.917498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.067 [2024-05-16 18:36:45.917507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:8 PRP1 0x0 PRP2 0x0 00:15:39.067 [2024-05-16 18:36:45.917518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917530] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:15:39.067 [2024-05-16 18:36:45.917540] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.067 [2024-05-16 18:36:45.917550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2216 len:8 PRP1 0x0 PRP2 0x0 00:15:39.067 [2024-05-16 18:36:45.917562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.067 [2024-05-16 18:36:45.917583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.067 [2024-05-16 18:36:45.917598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2224 len:8 PRP1 0x0 PRP2 0x0 00:15:39.067 [2024-05-16 18:36:45.917611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.067 [2024-05-16 18:36:45.917632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.067 [2024-05-16 18:36:45.917642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2232 len:8 PRP1 0x0 PRP2 0x0 00:15:39.067 [2024-05-16 18:36:45.917654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.067 [2024-05-16 18:36:45.917676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.067 [2024-05-16 18:36:45.917685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:8 PRP1 0x0 PRP2 0x0 00:15:39.067 [2024-05-16 18:36:45.917697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:39.067 [2024-05-16 18:36:45.917724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:39.067 [2024-05-16 18:36:45.917733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2248 len:8 PRP1 0x0 PRP2 0x0 00:15:39.067 [2024-05-16 18:36:45.917745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917801] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdc59b0 was disconnected and freed. reset controller. 
00:15:39.067 [2024-05-16 18:36:45.917817] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:39.067 [2024-05-16 18:36:45.917884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.067 [2024-05-16 18:36:45.917904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.067 [2024-05-16 18:36:45.917930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.067 [2024-05-16 18:36:45.917956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:39.067 [2024-05-16 18:36:45.917981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:39.067 [2024-05-16 18:36:45.917993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:39.067 [2024-05-16 18:36:45.921605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:39.067 [2024-05-16 18:36:45.921645] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd44020 (9): Bad file descriptor 00:15:39.067 [2024-05-16 18:36:45.953246] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:39.067 00:15:39.067 Latency(us) 00:15:39.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.067 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:39.067 Verification LBA range: start 0x0 length 0x4000 00:15:39.067 NVMe0n1 : 15.01 8339.22 32.58 210.87 0.00 14938.21 580.89 18945.86 00:15:39.067 =================================================================================================================== 00:15:39.067 Total : 8339.22 32.58 210.87 0.00 14938.21 580.89 18945.86 00:15:39.067 Received shutdown signal, test time was about 15.000000 seconds 00:15:39.067 00:15:39.067 Latency(us) 00:15:39.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.067 =================================================================================================================== 00:15:39.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:39.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76024 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76024 /var/tmp/bdevperf.sock 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 76024 ']' 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:39.067 18:36:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:39.635 18:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:39.635 18:36:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:15:39.635 18:36:52 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:39.894 [2024-05-16 18:36:53.156622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:39.894 18:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:39.894 [2024-05-16 18:36:53.388911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:40.152 18:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:40.411 NVMe0n1 00:15:40.411 18:36:53 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:40.671 00:15:40.930 18:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:41.189 00:15:41.189 18:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:41.189 18:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:41.449 18:36:54 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:41.708 18:36:55 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:44.993 18:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:44.993 18:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:44.993 18:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76107 00:15:44.993 18:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:44.993 18:36:58 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76107 00:15:45.929 0 00:15:46.188 18:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:46.188 [2024-05-16 18:36:51.938889] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:15:46.188 [2024-05-16 18:36:51.940053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76024 ] 00:15:46.188 [2024-05-16 18:36:52.083139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.188 [2024-05-16 18:36:52.169494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.188 [2024-05-16 18:36:52.223840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:46.188 [2024-05-16 18:36:54.984634] bdev_nvme.c:1867:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:46.188 [2024-05-16 18:36:54.984789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.188 [2024-05-16 18:36:54.984814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.188 [2024-05-16 18:36:54.984846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.188 [2024-05-16 18:36:54.984861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.188 [2024-05-16 18:36:54.984876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.188 [2024-05-16 18:36:54.984889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.188 [2024-05-16 18:36:54.984930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:46.188 [2024-05-16 18:36:54.984944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:46.188 [2024-05-16 18:36:54.984959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:46.188 [2024-05-16 18:36:54.985018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:46.188 [2024-05-16 18:36:54.985051] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c3020 (9): Bad file descriptor 00:15:46.188 [2024-05-16 18:36:54.992352] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
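The trace above covers the second leg of the failover test: the target is given extra listeners on ports 4421 and 4422, a fresh bdevperf instance (pid 76024, RPC socket /var/tmp/bdevperf.sock) attaches NVMe0 with paths on 4420, 4421 and 4422, the 4420 path is then detached, and the captured log confirms the controller failing over from 10.0.0.2:4420 to 10.0.0.2:4421 and resetting successfully. A condensed sketch of that sequence built from the RPC calls visible in the trace; it illustrates the flow rather than reproducing failover.sh, and the grep/sleep details are simplified:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Target side: expose two additional portals for the same subsystem.
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

    # Initiator side (bdevperf): attach the primary path plus two failover paths.
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s $port -f ipv4 -n $nqn
    done
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0

    # Drop the active 4420 path; in-flight I/O should fail over to 4421.
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n $nqn
    sleep 3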
00:15:46.188 Running I/O for 1 seconds... 00:15:46.188 00:15:46.188 Latency(us) 00:15:46.188 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.188 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:46.188 Verification LBA range: start 0x0 length 0x4000 00:15:46.188 NVMe0n1 : 1.02 5960.38 23.28 0.00 0.00 21380.84 2338.44 18350.08 00:15:46.188 =================================================================================================================== 00:15:46.188 Total : 5960.38 23.28 0.00 0.00 21380.84 2338.44 18350.08 00:15:46.188 18:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:46.188 18:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:46.447 18:36:59 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:46.706 18:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:46.706 18:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:46.964 18:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:47.222 18:37:00 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76024 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 76024 ']' 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 76024 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76024 00:15:50.558 killing process with pid 76024 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76024' 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 76024 00:15:50.558 18:37:03 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 76024 00:15:50.815 18:37:04 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:15:50.815 18:37:04 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:51.073 rmmod nvme_tcp 00:15:51.073 rmmod nvme_fabrics 00:15:51.073 rmmod nvme_keyring 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 75766 ']' 00:15:51.073 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 75766 00:15:51.074 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 75766 ']' 00:15:51.074 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 75766 00:15:51.074 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:15:51.074 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:51.074 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75766 00:15:51.332 killing process with pid 75766 00:15:51.332 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:51.332 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:51.332 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75766' 00:15:51.332 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 75766 00:15:51.332 [2024-05-16 18:37:04.583792] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:51.332 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 75766 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:51.590 00:15:51.590 real 0m33.472s 00:15:51.590 user 2m9.873s 00:15:51.590 sys 0m5.667s 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:15:51.590 18:37:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:51.590 ************************************ 00:15:51.590 END TEST nvmf_failover 00:15:51.590 ************************************ 00:15:51.590 18:37:04 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:51.590 18:37:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:51.590 18:37:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.590 18:37:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.590 ************************************ 00:15:51.590 START TEST nvmf_host_discovery 00:15:51.590 ************************************ 00:15:51.590 18:37:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:15:51.590 * Looking for test storage... 00:15:51.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.590 18:37:05 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:51.591 18:37:05 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.591 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:51.591 18:37:05 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:51.849 Cannot find device "nvmf_tgt_br" 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.849 Cannot find device "nvmf_tgt_br2" 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:51.849 Cannot find device "nvmf_tgt_br" 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:51.849 Cannot find device "nvmf_tgt_br2" 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.849 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.849 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.850 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:52.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:15:52.108 00:15:52.108 --- 10.0.0.2 ping statistics --- 00:15:52.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.108 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:52.108 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.108 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:15:52.108 00:15:52.108 --- 10.0.0.3 ping statistics --- 00:15:52.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.108 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:52.108 00:15:52.108 --- 10.0.0.1 ping statistics --- 00:15:52.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.108 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76377 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76377 00:15:52.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 76377 ']' 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:52.108 18:37:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 [2024-05-16 18:37:05.458895] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:15:52.108 [2024-05-16 18:37:05.459273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.108 [2024-05-16 18:37:05.600079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.368 [2024-05-16 18:37:05.701640] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.368 [2024-05-16 18:37:05.701996] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
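Before the discovery target comes up, nvmf_veth_init rebuilds the virtual topology those pings verify: the target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk network namespace, the initiator stays in the root namespace on 10.0.0.1, and the host-side veth ends are joined by the nvmf_br bridge with iptables ACCEPT rules for the NVMe/TCP port. A reduced sketch of that setup using the same ip/iptables commands seen in the trace; the second target interface (nvmf_tgt_if2, 10.0.0.3) and the teardown path are omitted for brevity:

    ns=nvmf_tgt_ns_spdk
    ip netns add $ns

    # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns $ns

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec $ns ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec $ns ip link set nvmf_tgt_if up
    ip netns exec $ns ip link set lo up

    # Bridge the host-side ends and allow NVMe/TCP traffic through.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2                    # initiator -> target
    ip netns exec $ns ping -c 1 10.0.0.1  # target -> initiator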
00:15:52.368 [2024-05-16 18:37:05.702256] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.368 [2024-05-16 18:37:05.702275] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.368 [2024-05-16 18:37:05.702282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.368 [2024-05-16 18:37:05.702317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.368 [2024-05-16 18:37:05.755252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:53.301 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:53.301 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:15:53.301 18:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.301 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:53.301 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.301 18:37:06 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.301 18:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.301 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.301 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.302 [2024-05-16 18:37:06.523256] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.302 [2024-05-16 18:37:06.531166] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:53.302 [2024-05-16 18:37:06.531425] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.302 null0 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.302 null1 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76409 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76409 /tmp/host.sock 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 76409 ']' 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:53.302 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:53.302 18:37:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:53.302 [2024-05-16 18:37:06.621360] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:15:53.302 [2024-05-16 18:37:06.621813] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76409 ] 00:15:53.302 [2024-05-16 18:37:06.763997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.560 [2024-05-16 18:37:06.920475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.560 [2024-05-16 18:37:06.993092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:54.127 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:54.127 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:15:54.127 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:54.127 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:54.127 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.127 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.127 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.128 
18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.128 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.390 18:37:07 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.390 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.649 [2024-05-16 18:37:07.971776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # 
get_subsystem_names 00:15:54.649 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.650 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.650 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.650 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.650 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.650 18:37:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.650 18:37:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.650 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:15:54.908 18:37:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:15:55.167 [2024-05-16 18:37:08.606405] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:55.167 [2024-05-16 18:37:08.607946] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:55.167 [2024-05-16 18:37:08.607993] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:55.167 [2024-05-16 18:37:08.613075] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:15:55.425 [2024-05-16 18:37:08.670038] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:55.425 [2024-05-16 18:37:08.670362] 
bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:55.995 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:55.995 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:55.995 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:55.995 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:55.995 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.995 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:55.995 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.995 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 
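The discovery test above drives two SPDK applications: the target (pid 76377, started inside the namespace) gets a TCP transport, a discovery listener on port 8009 and two null bdevs, while the host app (pid 76409, RPC socket /tmp/host.sock) starts bdev_nvme discovery against that portal; once nqn.2016-06.io.spdk:cnode0 is published, the discovery log shows nvme0 being attached and the test polls get_subsystem_names/get_bdev_list until nvme0 and nvme0n1 appear. A compressed sketch of the target-side and host-side RPC sequence taken from the commands visible in the trace; the ordering is simplified and the waitforcondition polling helper is reduced to a plain retry loop:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Target side (default RPC socket): transport, discovery portal, null bdevs.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    $rpc bdev_null_create null0 1000 512
    $rpc bdev_null_create null1 1000 512

    # Host side: start discovery against the 8009 portal.
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

    # Target side: publish a data subsystem; discovery should attach it as nvme0.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2021-12.io.spdk:test

    # Simplified poll until the discovered controller shows up on the host.
    for _ in $(seq 1 10); do
        $rpc -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | grep -qx nvme0 && break
        sleep 1
    done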
00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:55.996 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:55.997 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.257 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.257 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:56.257 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.257 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:56.257 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:56.257 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:56.258 
18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.258 [2024-05-16 18:37:09.594707] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:56.258 [2024-05-16 18:37:09.595846] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:56.258 [2024-05-16 18:37:09.596045] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:56.258 [2024-05-16 18:37:09.601806] bdev_nvme.c:6902:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 
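The is_notification_count_eq checks in this block lean on a small helper in host/discovery.sh (@74-@75): it asks the host-side RPC socket for notifications newer than the last seen id and counts them with jq. A plausible sketch inferred from the traced commands (the exact bookkeeping in discovery.sh may differ slightly):

    get_notification_count() {
        # count bdev add/remove events newer than $notify_id (@74)
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        # advance the high-water mark so the next check only sees new events (@75)
        notify_id=$((notify_id + notification_count))
    }

That bookkeeping matches the values printed above: notify_id climbs 0 -> 1 -> 2 and later 4 as each batch of notifications is consumed.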
00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:56.258 [2024-05-16 18:37:09.659184] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:56.258 [2024-05-16 18:37:09.659380] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:15:56.258 [2024-05-16 18:37:09.659488] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:56.258 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.516 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:56.516 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.516 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:56.516 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:56.516 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:56.516 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:56.516 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.516 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.516 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.517 [2024-05-16 18:37:09.839238] bdev_nvme.c:6960:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:15:56.517 [2024-05-16 18:37:09.839292] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:56.517 [2024-05-16 18:37:09.845183] bdev_nvme.c:6765:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:15:56.517 [2024-05-16 18:37:09.845217] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:56.517 [2024-05-16 18:37:09.845352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.517 [2024-05-16 18:37:09.845402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.517 [2024-05-16 18:37:09.845417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.517 [2024-05-16 18:37:09.845427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.517 [2024-05-16 18:37:09.845437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.517 [2024-05-16 18:37:09.845447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.517 [2024-05-16 18:37:09.845457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:56.517 [2024-05-16 18:37:09.845466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:56.517 [2024-05-16 18:37:09.845476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410800 is same with the state(5) to be set 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.517 18:37:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.776 
18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:56.776 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.035 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:57.035 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:57.035 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:15:57.035 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:15:57.035 18:37:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.035 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.035 18:37:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.971 [2024-05-16 18:37:11.303150] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:15:57.971 [2024-05-16 18:37:11.303238] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:15:57.971 [2024-05-16 18:37:11.303260] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:15:57.971 [2024-05-16 18:37:11.309193] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:15:57.971 [2024-05-16 18:37:11.369342] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:15:57.971 [2024-05-16 18:37:11.369416] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.971 request: 00:15:57.971 { 00:15:57.971 "name": "nvme", 00:15:57.971 "trtype": "tcp", 00:15:57.971 "traddr": "10.0.0.2", 00:15:57.971 "hostnqn": "nqn.2021-12.io.spdk:test", 
00:15:57.971 "adrfam": "ipv4", 00:15:57.971 "trsvcid": "8009", 00:15:57.971 "wait_for_attach": true, 00:15:57.971 "method": "bdev_nvme_start_discovery", 00:15:57.971 "req_id": 1 00:15:57.971 } 00:15:57.971 Got JSON-RPC error response 00:15:57.971 response: 00:15:57.971 { 00:15:57.971 "code": -17, 00:15:57.971 "message": "File exists" 00:15:57.971 } 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:57.971 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- 
# type -t rpc_cmd 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.229 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.229 request: 00:15:58.229 { 00:15:58.229 "name": "nvme_second", 00:15:58.229 "trtype": "tcp", 00:15:58.229 "traddr": "10.0.0.2", 00:15:58.229 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:58.229 "adrfam": "ipv4", 00:15:58.229 "trsvcid": "8009", 00:15:58.229 "wait_for_attach": true, 00:15:58.229 "method": "bdev_nvme_start_discovery", 00:15:58.229 "req_id": 1 00:15:58.229 } 00:15:58.229 Got JSON-RPC error response 00:15:58.229 response: 00:15:58.229 { 00:15:58.230 "code": -17, 00:15:58.230 "message": "File exists" 00:15:58.230 } 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.230 18:37:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:59.605 [2024-05-16 18:37:12.666665] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:59.605 [2024-05-16 18:37:12.666762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14aad20 with addr=10.0.0.2, port=8010 00:15:59.605 [2024-05-16 18:37:12.666795] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:59.605 [2024-05-16 18:37:12.666808] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:59.605 [2024-05-16 18:37:12.666837] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:00.171 [2024-05-16 18:37:13.666641] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:00.171 [2024-05-16 18:37:13.666745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14aad20 with addr=10.0.0.2, port=8010 00:16:00.171 [2024-05-16 18:37:13.666778] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:00.171 [2024-05-16 18:37:13.666790] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:00.171 [2024-05-16 18:37:13.666801] bdev_nvme.c:7040:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:01.547 [2024-05-16 18:37:14.666426] bdev_nvme.c:7021:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:01.547 request: 00:16:01.547 { 00:16:01.547 "name": "nvme_second", 00:16:01.547 "trtype": "tcp", 00:16:01.547 "traddr": "10.0.0.2", 00:16:01.547 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:01.547 "adrfam": "ipv4", 00:16:01.547 "trsvcid": "8010", 00:16:01.547 "attach_timeout_ms": 3000, 00:16:01.547 "method": "bdev_nvme_start_discovery", 00:16:01.547 "req_id": 1 00:16:01.547 } 00:16:01.547 Got JSON-RPC error response 00:16:01.547 response: 00:16:01.547 { 00:16:01.547 "code": -110, 00:16:01.547 "message": "Connection timed out" 00:16:01.547 } 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:01.547 
18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76409 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.547 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.548 rmmod nvme_tcp 00:16:01.548 rmmod nvme_fabrics 00:16:01.548 rmmod nvme_keyring 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76377 ']' 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76377 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 76377 ']' 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 76377 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76377 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:01.548 killing process with pid 76377 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76377' 
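The NOT-wrapped registrations earlier in this block (host/discovery.sh@143, @149 and @155) exercise the bdev_nvme_start_discovery error paths: re-registering the running discovery service, even under a new name, is rejected with -17 "File exists", while a target that never answers on port 8010 keeps failing connect() (errno 111) until the 3000 ms attach window expires and -110 "Connection timed out" is returned. Stripped of the NOT/valid_exec_arg plumbing, the failing invocations are simply:

    # duplicate registration of the running discovery service -> JSON-RPC error -17 ("File exists")
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # nothing listens on 8010, so the attach is retried until -T 3000 ms expire -> error -110
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000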
00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 76377 00:16:01.548 [2024-05-16 18:37:14.854345] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:01.548 18:37:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 76377 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:01.807 00:16:01.807 real 0m10.202s 00:16:01.807 user 0m19.782s 00:16:01.807 sys 0m1.998s 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:01.807 ************************************ 00:16:01.807 END TEST nvmf_host_discovery 00:16:01.807 ************************************ 00:16:01.807 18:37:15 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:01.807 18:37:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:01.807 18:37:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:01.807 18:37:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.807 ************************************ 00:16:01.807 START TEST nvmf_host_multipath_status 00:16:01.807 ************************************ 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:01.807 * Looking for test storage... 
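Between the two suites, nvmftestfini (nvmf/common.sh@488-@496) tears the host-discovery environment back down before multipath_status.sh rebuilds the same veth topology. Condensed from the commands traced above, with the killprocess internals abbreviated, the cleanup amounts to:

    sync                              # flush buffers before unloading modules (common.sh@117)
    modprobe -v -r nvme-tcp           # rmmod output above shows nvme_tcp, nvme_fabrics and nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    killprocess 76377                 # stop the nvmf_tgt reactor for this suite and wait for it to exit
    _remove_spdk_ns 14> /dev/null     # drop the nvmf_tgt_ns_spdk network namespace (@278)
    ip -4 addr flush nvmf_init_if     # clear the initiator-side test address (@279)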
00:16:01.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.807 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:01.808 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:02.067 Cannot find device "nvmf_tgt_br" 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:02.067 Cannot find device "nvmf_tgt_br2" 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:02.067 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:02.068 Cannot find device "nvmf_tgt_br" 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:02.068 Cannot find device "nvmf_tgt_br2" 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:02.068 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:02.068 18:37:15 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:02.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:16:02.327 00:16:02.327 --- 10.0.0.2 ping statistics --- 00:16:02.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.327 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:02.327 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.327 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:16:02.327 00:16:02.327 --- 10.0.0.3 ping statistics --- 00:16:02.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.327 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:16:02.327 00:16:02.327 --- 10.0.0.1 ping statistics --- 00:16:02.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.327 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=76862 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 76862 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 76862 ']' 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:02.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:02.327 18:37:15 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:02.327 [2024-05-16 18:37:15.748937] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:16:02.327 [2024-05-16 18:37:15.749575] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.586 [2024-05-16 18:37:15.886290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:02.586 [2024-05-16 18:37:16.032496] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
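Editor's note for readers skimming the trace above: everything between remove_spdk_ns and the three pings is nvmf_veth_init building the virtual topology this multipath test runs on (NET_TYPE=virt). The lines below are a condensed, hedged sketch of that setup in plain shell; every command, interface name, and 10.0.0.x address is taken from the log itself, and only the ordering into one standalone snippet is illustrative.

  # Hedged sketch of the topology nvmf_veth_init assembles, per the trace above (run as root).
  ip netns add nvmf_tgt_ns_spdk                                # the target will run inside this namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target-side veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                              # bridge joins the host-side peer interfaces
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings captured above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are only connectivity checks; after modprobe nvme-tcp the target is launched under ip netns exec nvmf_tgt_ns_spdk, so it listens on the namespaced addresses.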
00:16:02.586 [2024-05-16 18:37:16.032566] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.586 [2024-05-16 18:37:16.032579] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:02.586 [2024-05-16 18:37:16.032589] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:02.586 [2024-05-16 18:37:16.032596] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:02.586 [2024-05-16 18:37:16.032743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.586 [2024-05-16 18:37:16.032959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.844 [2024-05-16 18:37:16.105384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:03.410 18:37:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:03.410 18:37:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:16:03.410 18:37:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:03.410 18:37:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:03.410 18:37:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:03.410 18:37:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.410 18:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76862 00:16:03.410 18:37:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:03.668 [2024-05-16 18:37:17.029650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.668 18:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:03.927 Malloc0 00:16:03.927 18:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:04.186 18:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:04.445 18:37:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.704 [2024-05-16 18:37:18.120620] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:04.704 [2024-05-16 18:37:18.121003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.704 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:04.963 [2024-05-16 18:37:18.365188] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 
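At this point the target side is fully configured. The RPC sequence scattered through the trace (multipath_status.sh@36 through @42) boils down to the short recap below; the commands and arguments are copied from the log, the $rpc shorthand is only for readability, and the flag annotations are the editor's gloss rather than script output.

  # Hedged recap of the target-side bring-up driven over /var/tmp/spdk.sock (arguments as logged).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport; flags as captured in the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0                     # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 from the script
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2   # -r enables ANA reporting
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The two listeners on ports 4420 and 4421 are the paths whose ANA state the rest of the run toggles with nvmf_subsystem_listener_set_ana_state.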
00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76912 00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76912 /var/tmp/bdevperf.sock 00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 76912 ']' 00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:04.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:04.963 18:37:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:05.903 18:37:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:05.903 18:37:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:16:05.903 18:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:06.161 18:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:06.728 Nvme0n1 00:16:06.728 18:37:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:06.987 Nvme0n1 00:16:06.987 18:37:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:06.987 18:37:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:08.890 18:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:08.890 18:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:09.148 18:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:09.406 18:37:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:10.343 18:37:23 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:10.343 18:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:10.343 18:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.343 18:37:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:10.601 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:10.601 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:10.601 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.601 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:10.860 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:10.860 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:10.860 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:10.860 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:11.426 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.426 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:11.426 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.426 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:11.684 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.684 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:11.684 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.684 18:37:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:11.684 18:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.684 18:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:11.684 18:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.684 18:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:11.943 18:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.943 18:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:11.943 18:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:12.510 18:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:12.510 18:37:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:13.885 18:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:13.885 18:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:13.885 18:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:13.885 18:37:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:13.885 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:13.885 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:13.886 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:13.886 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.145 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.145 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:14.145 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.145 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:14.404 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.404 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:14.404 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.404 18:37:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:14.663 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.663 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:14.663 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.663 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:14.922 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.922 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:14.922 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:14.922 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.197 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.197 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:15.197 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:15.455 18:37:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:15.712 18:37:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:16.644 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:16.645 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:16.645 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:16.645 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:16.903 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:16.903 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:16.903 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:16.903 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.161 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:17.161 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:17.161 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
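Each check_status line in the trace expands into six port_status probes (current, connected, accessible for trsvcid 4420 and 4421) against the bdevperf RPC socket. The helper's body is never printed in full here, so the function below is a reconstruction from the visible rpc.py, jq, and [[ ]] invocations, not a verbatim copy of the script.

  # Reconstructed sketch of the per-path probe repeated throughout the trace (assumed helper body).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  port_status() {   # usage: port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
      local port=$1 field=$2 expected=$3
      [[ "$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")" == "$expected" ]]
  }

After each set_ANA_state pair the script sleeps one second and then asserts the expected combination; later in the trace, once bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active is applied, check_status expects both optimized paths to report current=true at the same time.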
00:16:17.161 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:17.420 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.420 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:17.420 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.420 18:37:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:17.986 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.986 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:17.986 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:17.986 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:17.986 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:17.986 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:17.986 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:17.986 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:18.245 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:18.245 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:18.245 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:18.503 18:37:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:18.761 18:37:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:19.697 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:19.697 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:19.697 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:19.697 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:20.263 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.263 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:20.263 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.263 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:20.263 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:20.263 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:20.263 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.263 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:20.521 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.521 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:20.521 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:20.521 18:37:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.779 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:20.779 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:20.779 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:20.779 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:21.074 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.074 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:21.074 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.074 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:21.333 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:21.333 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:16:21.333 18:37:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:21.591 18:37:35 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:21.848 18:37:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:23.221 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:23.221 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:23.221 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:23.221 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.221 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.221 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:23.221 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:23.221 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.479 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:23.479 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:23.479 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.479 18:37:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:23.738 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.738 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:23.738 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:23.738 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.996 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:23.996 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:23.996 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:23.996 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:24.255 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.255 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 
-- # port_status 4421 accessible false 00:16:24.255 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.255 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:24.513 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.513 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:24.513 18:37:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:24.771 18:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:25.029 18:37:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:25.963 18:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:25.963 18:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:25.963 18:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:25.963 18:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.220 18:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:26.220 18:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:26.220 18:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.220 18:37:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:26.822 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.822 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:26.822 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.822 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:26.822 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.822 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:26.822 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:26.822 18:37:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:27.079 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.079 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:27.079 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:27.079 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.645 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:27.645 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:27.645 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.645 18:37:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:27.903 18:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.903 18:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:28.161 18:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:28.161 18:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:28.420 18:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:28.678 18:37:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:29.612 18:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:29.612 18:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:29.612 18:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.612 18:37:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:29.871 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.871 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:29.871 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:29.871 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:30.131 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.131 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:30.131 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:30.131 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.389 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.389 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:30.389 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.389 18:37:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:30.648 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.648 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:30.648 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.648 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:30.906 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.906 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:30.906 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.906 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:31.474 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.474 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:31.474 18:37:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:31.732 18:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:31.990 18:37:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:32.951 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:32.951 18:37:46 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:32.951 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.951 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:33.209 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:33.209 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:33.209 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:33.209 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.468 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.468 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:33.468 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.468 18:37:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:33.727 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.727 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:33.727 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.986 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:33.986 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:33.986 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:34.244 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.244 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:34.502 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.502 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:34.502 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.502 18:37:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:34.760 18:37:48 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.760 18:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:34.760 18:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:35.018 18:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:35.277 18:37:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:36.211 18:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:36.211 18:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:36.211 18:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.211 18:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:36.470 18:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.470 18:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:36.470 18:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.470 18:37:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:36.728 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.728 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:36.728 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.729 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:36.987 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:36.987 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:36.987 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.987 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:37.246 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.246 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:37.246 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:37.246 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.505 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.505 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:37.505 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.505 18:37:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:37.769 18:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.769 18:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:37.769 18:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:38.027 18:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:38.285 18:37:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:39.661 18:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:39.661 18:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:39.661 18:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:39.661 18:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.661 18:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:39.661 18:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:39.661 18:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.661 18:37:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:39.921 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:39.921 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:39.921 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:39.921 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").connected' 00:16:40.179 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.179 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:40.179 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:40.179 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.438 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.438 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:40.438 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.438 18:37:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:40.696 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.696 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:40.696 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.696 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76912 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 76912 ']' 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 76912 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76912 00:16:40.955 killing process with pid 76912 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76912' 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 76912 00:16:40.955 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 76912 00:16:41.218 Connection closed with partial response: 00:16:41.218 00:16:41.218 00:16:41.218 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76912 00:16:41.218 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- 
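The multipath checks traced above reduce to three small shell helpers in host/multipath_status.sh: port_status filters the bdevperf bdev_nvme_get_io_paths RPC output by the listener's trsvcid, check_status asserts the current/connected/accessible attributes for ports 4420 and 4421, and set_ANA_state reprograms both target listeners. The sketch below is reconstructed only from the commands echoed in this log (variable names such as rpc_py, bdevperf_rpc_sock and NVMF_FIRST_TARGET_IP are assumptions), so the real script bodies may differ:

# Reconstruction of the traced helpers, inferred from the echoed commands only.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path taken from the trace
bdevperf_rpc_sock=/var/tmp/bdevperf.sock             # bdevperf's private RPC socket
NQN=nqn.2016-06.io.spdk:cnode1
NVMF_FIRST_TARGET_IP=10.0.0.2                        # assumed variable name; address taken from the trace

port_status() {    # e.g. port_status 4420 current true
        local port=$1 attr=$2 expected=$3 actual
        # Ask bdevperf which I/O paths it currently sees, then pull the requested
        # attribute (current/connected/accessible) for the listener on this port.
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
                jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
}

check_status() {   # args: 4420-current 4421-current 4420-connected 4421-connected 4420-accessible 4421-accessible
        # Relies on the harness's errexit to abort on the first mismatch.
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
}

set_ANA_state() {  # e.g. set_ANA_state non_optimized inaccessible
        # Flip the ANA state of both target-side listeners; the test then sleeps 1s
        # before re-checking what the host-side bdevperf reports.
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a "$NVMF_FIRST_TARGET_IP" -s 4420 -n "$1"
        "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a "$NVMF_FIRST_TARGET_IP" -s 4421 -n "$2"
}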
host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:41.218 [2024-05-16 18:37:18.442250] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:16:41.218 [2024-05-16 18:37:18.442393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76912 ] 00:16:41.218 [2024-05-16 18:37:18.577910] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.218 [2024-05-16 18:37:18.747911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:41.218 [2024-05-16 18:37:18.819466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:41.218 Running I/O for 90 seconds... 00:16:41.218 [2024-05-16 18:37:35.056048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.056153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.056250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.056288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.056324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.056360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.056404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.056440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:69256 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.056475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.056967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.056989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.057002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.057038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.057086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.218 [2024-05-16 18:37:35.057126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.057171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.057207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.057243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057264] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.057278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.057314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.057350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.057385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.057421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.218 [2024-05-16 18:37:35.057457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:41.218 [2024-05-16 18:37:35.057479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.057493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.057528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.057574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.057610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 
p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.057646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.057682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.057718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.057756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.057793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.057843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.057881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.057919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.057955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.057977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.057991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.058037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.219 [2024-05-16 18:37:35.058636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.058672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.058708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:41.219 [2024-05-16 18:37:35.058744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.058780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.058816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.058867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.058919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.058969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.058993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.219 [2024-05-16 18:37:35.059008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:41.219 [2024-05-16 18:37:35.059030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.059045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.059081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.059118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.059154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.059205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.059241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.059278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.059726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.059777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.059834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.059881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.059942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.059970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.059985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.060415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
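The completion notices in this stretch of the dump all carry the status pair (03/02), i.e. Status Code Type 3h (Path Related Status) with Status Code 02h (Asymmetric Access Inaccessible), consistent with the ANA transitions this test drives on the 4420/4421 listeners; the trailing sqhd, p, m and dnr fields are the completion's SQ head pointer, phase tag, More bit and Do Not Retry bit. To condense a dump like this after the fact, a one-liner along these lines (purely a hypothetical triage aid, not part of the test) counts those completions in the try.txt file cat'ed above:

# Hypothetical triage aid: count bdevperf completions with the ANA-inaccessible status pair.
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt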
00:16:41.220 [2024-05-16 18:37:35.060451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.060954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.060990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.061006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.061035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.061050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.061078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.061093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.061137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.220 [2024-05-16 18:37:35.061156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:41.220 [2024-05-16 18:37:35.061185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.220 [2024-05-16 18:37:35.061200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:35.061243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:35.061286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:35.061329] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:35.061371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:35.061424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:35.061469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:35.061512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:35.061563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:35.061609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:35.061652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:35.061694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:35.061737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:41.221 [2024-05-16 18:37:35.061780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:35.061840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:35.061870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:35.061885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:51.720115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:51.720206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:115096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:115112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 
nsid:1 lba:115128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:51.720643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:115224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:115256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.720785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:51.720853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:51.720891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:51.720928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:51.720966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.720987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:51.721002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.721023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.221 [2024-05-16 18:37:51.721038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.721060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.721074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:41.221 [2024-05-16 18:37:51.721095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:115296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.221 [2024-05-16 18:37:51.721110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 
m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:115328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.721835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:115472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721946] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.721968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.721982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.722004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.722018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.724036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.724073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.724104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:41.222 [2024-05-16 18:37:51.724120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.724142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.724157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.724179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.724194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:41.222 [2024-05-16 18:37:51.724216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:41.222 [2024-05-16 18:37:51.724245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:41.222 Received shutdown signal, test time was about 33.980453 seconds 00:16:41.222 00:16:41.222 Latency(us) 00:16:41.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.222 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:41.222 Verification LBA range: start 0x0 length 0x4000 00:16:41.222 Nvme0n1 : 33.98 7878.16 30.77 0.00 0.00 16214.14 376.09 4026531.84 00:16:41.222 =================================================================================================================== 00:16:41.222 Total : 7878.16 30.77 0.00 0.00 16214.14 376.09 4026531.84 00:16:41.222 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:41.482 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:41.482 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:41.741 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:41.741 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:41.741 18:37:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:41.741 rmmod nvme_tcp 00:16:41.741 rmmod nvme_fabrics 00:16:41.741 rmmod nvme_keyring 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 76862 ']' 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 76862 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 76862 ']' 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 76862 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76862 00:16:41.741 killing process with pid 76862 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76862' 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 76862 00:16:41.741 [2024-05-16 18:37:55.123648] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:41.741 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 76862 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:41.999 ************************************ 00:16:41.999 END TEST nvmf_host_multipath_status 00:16:41.999 ************************************ 00:16:41.999 00:16:41.999 real 0m40.196s 00:16:41.999 user 2m9.775s 00:16:41.999 sys 0m11.968s 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:41.999 18:37:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:41.999 18:37:55 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:41.999 18:37:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:41.999 18:37:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:41.999 18:37:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.999 ************************************ 00:16:41.999 START TEST nvmf_discovery_remove_ifc 00:16:41.999 ************************************ 00:16:41.999 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:42.258 * Looking for test storage... 00:16:42.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:42.258 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:42.258 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:42.258 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:42.258 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:42.258 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:42.258 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:42.258 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:42.258 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:42.258 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:42.259 18:37:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:42.259 18:37:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:42.259 Cannot find device "nvmf_tgt_br" 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:42.259 Cannot find device "nvmf_tgt_br2" 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:42.259 Cannot find device "nvmf_tgt_br" 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:42.259 Cannot find device "nvmf_tgt_br2" 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:42.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:42.259 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link 
add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:42.259 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:42.260 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:42.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:16:42.518 00:16:42.518 --- 10.0.0.2 ping statistics --- 00:16:42.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.518 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:42.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:42.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:16:42.518 00:16:42.518 --- 10.0.0.3 ping statistics --- 00:16:42.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.518 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:16:42.518 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:42.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:42.518 00:16:42.518 --- 10.0.0.1 ping statistics --- 00:16:42.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.519 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:42.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77699 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77699 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 77699 ']' 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:42.519 18:37:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:42.519 [2024-05-16 18:37:56.016712] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:16:42.519 [2024-05-16 18:37:56.017157] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.778 [2024-05-16 18:37:56.157682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.778 [2024-05-16 18:37:56.266994] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:42.778 [2024-05-16 18:37:56.267249] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.778 [2024-05-16 18:37:56.267410] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.778 [2024-05-16 18:37:56.267562] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.778 [2024-05-16 18:37:56.267604] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.778 [2024-05-16 18:37:56.267740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.067 [2024-05-16 18:37:56.324011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:43.633 18:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:43.633 18:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:16:43.633 18:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.633 18:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.633 18:37:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.633 [2024-05-16 18:37:57.030950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.633 [2024-05-16 18:37:57.038880] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:43.633 [2024-05-16 18:37:57.039107] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:43.633 null0 00:16:43.633 [2024-05-16 18:37:57.071002] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.633 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
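[editor's note] The batched rpc_cmd at discovery_remove_ifc.sh@43 above is what produced the "TCP Transport Init" notice, the discovery listener on 10.0.0.2:8009, the null0 bdev and the data listener on 10.0.0.2:4420. A rough equivalent as individual rpc.py calls is sketched below for readability; the null bdev geometry, the serial number and the explicit host-allow step are assumptions for illustration, not values taken from this trace.

  # hedged sketch of the target-side configuration implied by the notices above
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py            # default /var/tmp/spdk.sock target
  $RPC nvmf_create_transport -t tcp                          # "*** TCP Transport Init ***"
  $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
       -t tcp -a 10.0.0.2 -s 8009                            # discovery listener on port 8009
  $RPC bdev_null_create null0 1000 512                       # size/block size are placeholders
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001   # serial assumed
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test   # assumed
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
       -t tcp -a 10.0.0.2 -s 4420                            # data listener on port 4420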
00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77728 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77728 /tmp/host.sock 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 77728 ']' 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:43.633 18:37:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:43.891 [2024-05-16 18:37:57.148042] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:16:43.891 [2024-05-16 18:37:57.148387] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77728 ] 00:16:43.891 [2024-05-16 18:37:57.287687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.150 [2024-05-16 18:37:57.457636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.718 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:44.977 [2024-05-16 18:37:58.250881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:44.977 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.977 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:44.977 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.977 18:37:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.910 [2024-05-16 18:37:59.314770] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:45.910 [2024-05-16 18:37:59.314867] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:45.910 [2024-05-16 18:37:59.314912] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:45.910 [2024-05-16 18:37:59.320897] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:45.910 [2024-05-16 18:37:59.378843] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:45.910 [2024-05-16 18:37:59.378934] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:45.910 [2024-05-16 18:37:59.378976] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:45.910 [2024-05-16 18:37:59.379001] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:45.910 [2024-05-16 18:37:59.379035] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:45.910 [2024-05-16 18:37:59.382955] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x974380 was disconnected and freed. delete nvme_qpair. 
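[editor's note] The host side above amounts to starting a second SPDK app on /tmp/host.sock and pointing bdev_nvme at the discovery service on 10.0.0.2:8009; the tight --ctrlr-loss-timeout-sec 2 / --reconnect-delay-sec 1 / --fast-io-fail-timeout-sec 1 values are what let the interface removal later in the test tear the controller down within a couple of seconds. A condensed sketch of that sequence and of the wait_for_bdev polling that follows (command lines are taken from the trace above; the socket wait and the loop structure are reconstructions, not the harness's exact helpers):

  # host-side bdev_nvme app plus discovery attach, per discovery_remove_ifc.sh@58-72
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  while [ ! -S /tmp/host.sock ]; do sleep 0.2; done          # stand-in for the harness's waitforlisten
  $RPC -s /tmp/host.sock bdev_nvme_set_options -e 1          # options exactly as in the trace above
  $RPC -s /tmp/host.sock framework_start_init
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
       -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
       --fast-io-fail-timeout-sec 1 --wait-for-attach
  # wait_for_bdev nvme0n1: poll until the attached namespace shows up as a bdev
  while [[ "$($RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme0n1 ]]; do
      sleep 1
  done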
00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:45.910 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:46.169 18:37:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ 
nvme0n1 != '' ]] 00:16:47.104 18:38:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:48.488 18:38:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:49.423 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:49.423 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:49.424 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:49.424 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.424 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:49.424 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:49.424 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:49.424 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.424 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:49.424 18:38:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:50.359 18:38:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:51.294 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:16:51.294 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:51.294 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.294 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:51.294 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:51.294 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:51.294 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:51.294 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.553 [2024-05-16 18:38:04.805353] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:51.553 [2024-05-16 18:38:04.805730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.553 [2024-05-16 18:38:04.805990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.553 [2024-05-16 18:38:04.806212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.553 [2024-05-16 18:38:04.806228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.553 [2024-05-16 18:38:04.806240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.553 [2024-05-16 18:38:04.806256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.553 [2024-05-16 18:38:04.806267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.553 [2024-05-16 18:38:04.806277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.553 [2024-05-16 18:38:04.806289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:51.553 [2024-05-16 18:38:04.806299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.553 [2024-05-16 18:38:04.806309] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f840 is same with the state(5) to be set 00:16:51.553 [2024-05-16 18:38:04.815340] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f840 (9): Bad file descriptor 00:16:51.553 [2024-05-16 18:38:04.825369] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:51.553 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:51.553 18:38:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:52.490 18:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:52.490 18:38:05 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:52.490 18:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:52.490 18:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.490 18:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:52.490 18:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.490 18:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:52.490 [2024-05-16 18:38:05.858902] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:52.490 [2024-05-16 18:38:05.859049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x94f840 with addr=10.0.0.2, port=4420 00:16:52.490 [2024-05-16 18:38:05.859085] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94f840 is same with the state(5) to be set 00:16:52.490 [2024-05-16 18:38:05.859231] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94f840 (9): Bad file descriptor 00:16:52.490 [2024-05-16 18:38:05.859748] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:16:52.490 [2024-05-16 18:38:05.859787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:52.490 [2024-05-16 18:38:05.859806] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:52.490 [2024-05-16 18:38:05.859874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:52.490 [2024-05-16 18:38:05.859941] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:16:52.490 [2024-05-16 18:38:05.859963] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:52.490 18:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.490 18:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:52.490 18:38:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:53.427 [2024-05-16 18:38:06.860049] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
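The repeated rpc_cmd / jq / sort / xargs sequences above are the test polling the host application's bdev list once per second until it no longer reports nvme0n1 after the target-side interface goes away. A minimal sketch of the helpers implied by the trace (the exact script bodies are an assumption; rpc_cmd, the /tmp/host.sock path, and the one-second poll come from the log):

```bash
# Sketch only -- reconstructed from the xtrace output, not copied from the script.
get_bdev_list() {
    # List bdev names from the host SPDK app and flatten them onto one line
    # so the result can be compared against "nvme0n1" or an empty string.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local expected=$1
    # Poll roughly once per second, matching the "sleep 1" lines in the trace.
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}
```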
00:16:53.427 [2024-05-16 18:38:06.860155] bdev_nvme.c:6729:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:53.427 [2024-05-16 18:38:06.860230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.427 [2024-05-16 18:38:06.860249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.427 [2024-05-16 18:38:06.860266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.427 [2024-05-16 18:38:06.860286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.427 [2024-05-16 18:38:06.860296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.427 [2024-05-16 18:38:06.860307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.427 [2024-05-16 18:38:06.860318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.427 [2024-05-16 18:38:06.860328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.427 [2024-05-16 18:38:06.860339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.427 [2024-05-16 18:38:06.860348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.427 [2024-05-16 18:38:06.860360] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
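The admin-queue prints above (ASYNC EVENT REQUEST and KEEP ALIVE completing as ABORTED - SQ DELETION) are the discovery controller's outstanding commands being failed while its queues are torn down once 10.0.0.2 stops responding. For reference, the controllers the host app still tracks can be inspected over the same RPC socket the bdev polling uses; this is an illustration only and not part of the traced script:

```bash
# Illustration only (not in the traced script): dump the NVMe controllers the
# host application currently knows about, via the same /tmp/host.sock socket.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
```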
00:16:53.427 [2024-05-16 18:38:06.860403] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8de910 (9): Bad file descriptor 00:16:53.427 [2024-05-16 18:38:06.861394] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:53.427 [2024-05-16 18:38:06.861418] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:53.427 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.427 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.427 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.427 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.427 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.427 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.427 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.427 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:53.686 18:38:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.686 18:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:53.686 18:38:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:54.659 18:38:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:55.595 [2024-05-16 18:38:08.869284] bdev_nvme.c:6978:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:55.595 [2024-05-16 18:38:08.869337] bdev_nvme.c:7058:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:55.595 [2024-05-16 18:38:08.869361] bdev_nvme.c:6941:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:55.595 [2024-05-16 18:38:08.875329] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:55.595 [2024-05-16 18:38:08.931192] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:55.595 [2024-05-16 18:38:08.931499] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:55.595 [2024-05-16 18:38:08.931571] bdev_nvme.c:7768:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:55.595 [2024-05-16 18:38:08.931686] bdev_nvme.c:6797:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:16:55.595 [2024-05-16 18:38:08.931753] bdev_nvme.c:6756:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:55.595 [2024-05-16 18:38:08.937989] bdev_nvme.c:1614:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x980640 was disconnected and freed. delete nvme_qpair. 
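At this point the script restores the target address (the @82/@83 lines above) and waits for the subsystem to be re-discovered; because the original controller was already torn down, the re-attached namespace surfaces as nvme1n1 rather than nvme0n1, which is what the `[[ '' != \n\v\m\e\1\n\1 ]]` checks keep polling for. Grouped here for readability (the commands are taken from the trace; presenting them as a standalone sequence is an assumption):

```bash
# Restore the target-side address inside the nvmf_tgt_ns_spdk namespace and
# wait until the discovery service re-attaches the subsystem as a new bdev.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1
```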
00:16:55.595 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.595 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.595 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.595 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.595 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.595 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.595 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77728 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 77728 ']' 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 77728 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77728 00:16:55.853 killing process with pid 77728 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77728' 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 77728 00:16:55.853 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 77728 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:56.111 rmmod nvme_tcp 00:16:56.111 rmmod nvme_fabrics 00:16:56.111 rmmod nvme_keyring 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:16:56.111 18:38:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77699 ']' 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77699 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 77699 ']' 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 77699 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77699 00:16:56.111 killing process with pid 77699 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77699' 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 77699 00:16:56.111 [2024-05-16 18:38:09.575974] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:56.111 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 77699 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:56.369 ************************************ 00:16:56.369 END TEST nvmf_discovery_remove_ifc 00:16:56.369 ************************************ 00:16:56.369 00:16:56.369 real 0m14.381s 00:16:56.369 user 0m24.879s 00:16:56.369 sys 0m2.570s 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:56.369 18:38:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.628 18:38:09 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:56.628 18:38:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:56.628 18:38:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:56.628 18:38:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.628 ************************************ 
00:16:56.628 START TEST nvmf_identify_kernel_target 00:16:56.628 ************************************ 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:56.628 * Looking for test storage... 00:16:56.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.628 18:38:09 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:56.628 18:38:10 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:56.628 Cannot find device "nvmf_tgt_br" 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.628 Cannot find device "nvmf_tgt_br2" 00:16:56.628 18:38:10 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:56.628 Cannot find device "nvmf_tgt_br" 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:56.628 Cannot find device "nvmf_tgt_br2" 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:56.628 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
set nvmf_tgt_if2 up 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:56.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:16:56.887 00:16:56.887 --- 10.0.0.2 ping statistics --- 00:16:56.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.887 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:56.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:16:56.887 00:16:56.887 --- 10.0.0.3 ping statistics --- 00:16:56.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.887 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:16:56.887 00:16:56.887 --- 10.0.0.1 ping statistics --- 00:16:56.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.887 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:56.887 18:38:10 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:16:56.887 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:16:57.146 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:57.146 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:57.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:57.404 Waiting for block devices as requested 00:16:57.404 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:57.663 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:57.663 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:57.663 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:57.663 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:16:57.663 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:16:57.663 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:57.663 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:57.663 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:16:57.663 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:57.663 18:38:10 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:57.663 No valid GPT data, bailing 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:57.663 No valid GPT data, bailing 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:16:57.663 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:16:57.664 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:57.664 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:57.664 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:16:57.664 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:57.664 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:57.923 No valid GPT data, bailing 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:57.923 No valid GPT data, bailing 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid=8b07fcc8-e6b3-4152-8362-9695ab742add -a 10.0.0.1 -t tcp -s 4420 00:16:57.923 00:16:57.923 Discovery Log Number of Records 2, Generation counter 2 00:16:57.923 =====Discovery Log Entry 0====== 00:16:57.923 trtype: tcp 00:16:57.923 adrfam: ipv4 00:16:57.923 subtype: current discovery subsystem 00:16:57.923 treq: not specified, sq flow control disable supported 00:16:57.923 portid: 1 00:16:57.923 trsvcid: 4420 00:16:57.923 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:57.923 traddr: 10.0.0.1 00:16:57.923 eflags: none 00:16:57.923 sectype: none 00:16:57.923 =====Discovery Log Entry 1====== 00:16:57.923 trtype: tcp 00:16:57.923 adrfam: ipv4 00:16:57.923 subtype: nvme subsystem 00:16:57.923 treq: not specified, sq flow control disable supported 00:16:57.923 portid: 1 00:16:57.923 trsvcid: 4420 00:16:57.923 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:57.923 traddr: 10.0.0.1 00:16:57.923 eflags: none 00:16:57.923 sectype: none 00:16:57.923 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:57.923 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:58.246 ===================================================== 00:16:58.246 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:58.246 ===================================================== 00:16:58.246 Controller Capabilities/Features 00:16:58.246 ================================ 00:16:58.246 Vendor ID: 0000 00:16:58.246 Subsystem Vendor ID: 0000 00:16:58.246 Serial Number: d4472181cb8f02c94cf4 00:16:58.246 Model Number: Linux 00:16:58.246 Firmware Version: 6.7.0-68 00:16:58.246 Recommended Arb Burst: 0 
00:16:58.246 IEEE OUI Identifier: 00 00 00 00:16:58.246 Multi-path I/O 00:16:58.246 May have multiple subsystem ports: No 00:16:58.246 May have multiple controllers: No 00:16:58.246 Associated with SR-IOV VF: No 00:16:58.246 Max Data Transfer Size: Unlimited 00:16:58.246 Max Number of Namespaces: 0 00:16:58.246 Max Number of I/O Queues: 1024 00:16:58.246 NVMe Specification Version (VS): 1.3 00:16:58.246 NVMe Specification Version (Identify): 1.3 00:16:58.246 Maximum Queue Entries: 1024 00:16:58.246 Contiguous Queues Required: No 00:16:58.246 Arbitration Mechanisms Supported 00:16:58.246 Weighted Round Robin: Not Supported 00:16:58.246 Vendor Specific: Not Supported 00:16:58.246 Reset Timeout: 7500 ms 00:16:58.246 Doorbell Stride: 4 bytes 00:16:58.246 NVM Subsystem Reset: Not Supported 00:16:58.246 Command Sets Supported 00:16:58.246 NVM Command Set: Supported 00:16:58.246 Boot Partition: Not Supported 00:16:58.246 Memory Page Size Minimum: 4096 bytes 00:16:58.246 Memory Page Size Maximum: 4096 bytes 00:16:58.246 Persistent Memory Region: Not Supported 00:16:58.246 Optional Asynchronous Events Supported 00:16:58.246 Namespace Attribute Notices: Not Supported 00:16:58.246 Firmware Activation Notices: Not Supported 00:16:58.246 ANA Change Notices: Not Supported 00:16:58.246 PLE Aggregate Log Change Notices: Not Supported 00:16:58.246 LBA Status Info Alert Notices: Not Supported 00:16:58.246 EGE Aggregate Log Change Notices: Not Supported 00:16:58.246 Normal NVM Subsystem Shutdown event: Not Supported 00:16:58.246 Zone Descriptor Change Notices: Not Supported 00:16:58.246 Discovery Log Change Notices: Supported 00:16:58.246 Controller Attributes 00:16:58.246 128-bit Host Identifier: Not Supported 00:16:58.246 Non-Operational Permissive Mode: Not Supported 00:16:58.246 NVM Sets: Not Supported 00:16:58.246 Read Recovery Levels: Not Supported 00:16:58.246 Endurance Groups: Not Supported 00:16:58.246 Predictable Latency Mode: Not Supported 00:16:58.246 Traffic Based Keep ALive: Not Supported 00:16:58.246 Namespace Granularity: Not Supported 00:16:58.246 SQ Associations: Not Supported 00:16:58.246 UUID List: Not Supported 00:16:58.246 Multi-Domain Subsystem: Not Supported 00:16:58.246 Fixed Capacity Management: Not Supported 00:16:58.246 Variable Capacity Management: Not Supported 00:16:58.246 Delete Endurance Group: Not Supported 00:16:58.246 Delete NVM Set: Not Supported 00:16:58.246 Extended LBA Formats Supported: Not Supported 00:16:58.246 Flexible Data Placement Supported: Not Supported 00:16:58.246 00:16:58.246 Controller Memory Buffer Support 00:16:58.246 ================================ 00:16:58.246 Supported: No 00:16:58.246 00:16:58.246 Persistent Memory Region Support 00:16:58.246 ================================ 00:16:58.246 Supported: No 00:16:58.246 00:16:58.246 Admin Command Set Attributes 00:16:58.246 ============================ 00:16:58.246 Security Send/Receive: Not Supported 00:16:58.246 Format NVM: Not Supported 00:16:58.246 Firmware Activate/Download: Not Supported 00:16:58.246 Namespace Management: Not Supported 00:16:58.246 Device Self-Test: Not Supported 00:16:58.246 Directives: Not Supported 00:16:58.246 NVMe-MI: Not Supported 00:16:58.246 Virtualization Management: Not Supported 00:16:58.246 Doorbell Buffer Config: Not Supported 00:16:58.246 Get LBA Status Capability: Not Supported 00:16:58.246 Command & Feature Lockdown Capability: Not Supported 00:16:58.246 Abort Command Limit: 1 00:16:58.246 Async Event Request Limit: 1 00:16:58.246 Number of Firmware Slots: N/A 
00:16:58.246 Firmware Slot 1 Read-Only: N/A 00:16:58.246 Firmware Activation Without Reset: N/A 00:16:58.246 Multiple Update Detection Support: N/A 00:16:58.246 Firmware Update Granularity: No Information Provided 00:16:58.246 Per-Namespace SMART Log: No 00:16:58.246 Asymmetric Namespace Access Log Page: Not Supported 00:16:58.246 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:58.246 Command Effects Log Page: Not Supported 00:16:58.246 Get Log Page Extended Data: Supported 00:16:58.246 Telemetry Log Pages: Not Supported 00:16:58.246 Persistent Event Log Pages: Not Supported 00:16:58.246 Supported Log Pages Log Page: May Support 00:16:58.246 Commands Supported & Effects Log Page: Not Supported 00:16:58.246 Feature Identifiers & Effects Log Page:May Support 00:16:58.246 NVMe-MI Commands & Effects Log Page: May Support 00:16:58.246 Data Area 4 for Telemetry Log: Not Supported 00:16:58.246 Error Log Page Entries Supported: 1 00:16:58.246 Keep Alive: Not Supported 00:16:58.246 00:16:58.246 NVM Command Set Attributes 00:16:58.246 ========================== 00:16:58.246 Submission Queue Entry Size 00:16:58.247 Max: 1 00:16:58.247 Min: 1 00:16:58.247 Completion Queue Entry Size 00:16:58.247 Max: 1 00:16:58.247 Min: 1 00:16:58.247 Number of Namespaces: 0 00:16:58.247 Compare Command: Not Supported 00:16:58.247 Write Uncorrectable Command: Not Supported 00:16:58.247 Dataset Management Command: Not Supported 00:16:58.247 Write Zeroes Command: Not Supported 00:16:58.247 Set Features Save Field: Not Supported 00:16:58.247 Reservations: Not Supported 00:16:58.247 Timestamp: Not Supported 00:16:58.247 Copy: Not Supported 00:16:58.247 Volatile Write Cache: Not Present 00:16:58.247 Atomic Write Unit (Normal): 1 00:16:58.247 Atomic Write Unit (PFail): 1 00:16:58.247 Atomic Compare & Write Unit: 1 00:16:58.247 Fused Compare & Write: Not Supported 00:16:58.247 Scatter-Gather List 00:16:58.247 SGL Command Set: Supported 00:16:58.247 SGL Keyed: Not Supported 00:16:58.247 SGL Bit Bucket Descriptor: Not Supported 00:16:58.247 SGL Metadata Pointer: Not Supported 00:16:58.247 Oversized SGL: Not Supported 00:16:58.247 SGL Metadata Address: Not Supported 00:16:58.247 SGL Offset: Supported 00:16:58.247 Transport SGL Data Block: Not Supported 00:16:58.247 Replay Protected Memory Block: Not Supported 00:16:58.247 00:16:58.247 Firmware Slot Information 00:16:58.247 ========================= 00:16:58.247 Active slot: 0 00:16:58.247 00:16:58.247 00:16:58.247 Error Log 00:16:58.247 ========= 00:16:58.247 00:16:58.247 Active Namespaces 00:16:58.247 ================= 00:16:58.247 Discovery Log Page 00:16:58.247 ================== 00:16:58.247 Generation Counter: 2 00:16:58.247 Number of Records: 2 00:16:58.247 Record Format: 0 00:16:58.247 00:16:58.247 Discovery Log Entry 0 00:16:58.247 ---------------------- 00:16:58.247 Transport Type: 3 (TCP) 00:16:58.247 Address Family: 1 (IPv4) 00:16:58.247 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:58.247 Entry Flags: 00:16:58.247 Duplicate Returned Information: 0 00:16:58.247 Explicit Persistent Connection Support for Discovery: 0 00:16:58.247 Transport Requirements: 00:16:58.247 Secure Channel: Not Specified 00:16:58.247 Port ID: 1 (0x0001) 00:16:58.247 Controller ID: 65535 (0xffff) 00:16:58.247 Admin Max SQ Size: 32 00:16:58.247 Transport Service Identifier: 4420 00:16:58.247 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:58.247 Transport Address: 10.0.0.1 00:16:58.247 Discovery Log Entry 1 00:16:58.247 ---------------------- 
00:16:58.247 Transport Type: 3 (TCP) 00:16:58.247 Address Family: 1 (IPv4) 00:16:58.247 Subsystem Type: 2 (NVM Subsystem) 00:16:58.247 Entry Flags: 00:16:58.247 Duplicate Returned Information: 0 00:16:58.247 Explicit Persistent Connection Support for Discovery: 0 00:16:58.247 Transport Requirements: 00:16:58.247 Secure Channel: Not Specified 00:16:58.247 Port ID: 1 (0x0001) 00:16:58.247 Controller ID: 65535 (0xffff) 00:16:58.247 Admin Max SQ Size: 32 00:16:58.247 Transport Service Identifier: 4420 00:16:58.247 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:58.247 Transport Address: 10.0.0.1 00:16:58.247 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:58.247 get_feature(0x01) failed 00:16:58.247 get_feature(0x02) failed 00:16:58.247 get_feature(0x04) failed 00:16:58.247 ===================================================== 00:16:58.247 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:58.247 ===================================================== 00:16:58.247 Controller Capabilities/Features 00:16:58.247 ================================ 00:16:58.247 Vendor ID: 0000 00:16:58.247 Subsystem Vendor ID: 0000 00:16:58.247 Serial Number: b14794a3ad4cd06c5730 00:16:58.247 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:58.247 Firmware Version: 6.7.0-68 00:16:58.247 Recommended Arb Burst: 6 00:16:58.247 IEEE OUI Identifier: 00 00 00 00:16:58.247 Multi-path I/O 00:16:58.247 May have multiple subsystem ports: Yes 00:16:58.247 May have multiple controllers: Yes 00:16:58.247 Associated with SR-IOV VF: No 00:16:58.247 Max Data Transfer Size: Unlimited 00:16:58.247 Max Number of Namespaces: 1024 00:16:58.247 Max Number of I/O Queues: 128 00:16:58.247 NVMe Specification Version (VS): 1.3 00:16:58.247 NVMe Specification Version (Identify): 1.3 00:16:58.247 Maximum Queue Entries: 1024 00:16:58.247 Contiguous Queues Required: No 00:16:58.247 Arbitration Mechanisms Supported 00:16:58.247 Weighted Round Robin: Not Supported 00:16:58.247 Vendor Specific: Not Supported 00:16:58.247 Reset Timeout: 7500 ms 00:16:58.247 Doorbell Stride: 4 bytes 00:16:58.247 NVM Subsystem Reset: Not Supported 00:16:58.247 Command Sets Supported 00:16:58.247 NVM Command Set: Supported 00:16:58.247 Boot Partition: Not Supported 00:16:58.247 Memory Page Size Minimum: 4096 bytes 00:16:58.247 Memory Page Size Maximum: 4096 bytes 00:16:58.247 Persistent Memory Region: Not Supported 00:16:58.247 Optional Asynchronous Events Supported 00:16:58.247 Namespace Attribute Notices: Supported 00:16:58.247 Firmware Activation Notices: Not Supported 00:16:58.247 ANA Change Notices: Supported 00:16:58.247 PLE Aggregate Log Change Notices: Not Supported 00:16:58.247 LBA Status Info Alert Notices: Not Supported 00:16:58.247 EGE Aggregate Log Change Notices: Not Supported 00:16:58.247 Normal NVM Subsystem Shutdown event: Not Supported 00:16:58.247 Zone Descriptor Change Notices: Not Supported 00:16:58.247 Discovery Log Change Notices: Not Supported 00:16:58.247 Controller Attributes 00:16:58.247 128-bit Host Identifier: Supported 00:16:58.247 Non-Operational Permissive Mode: Not Supported 00:16:58.247 NVM Sets: Not Supported 00:16:58.247 Read Recovery Levels: Not Supported 00:16:58.247 Endurance Groups: Not Supported 00:16:58.247 Predictable Latency Mode: Not Supported 00:16:58.247 Traffic Based Keep ALive: 
Supported 00:16:58.247 Namespace Granularity: Not Supported 00:16:58.247 SQ Associations: Not Supported 00:16:58.247 UUID List: Not Supported 00:16:58.247 Multi-Domain Subsystem: Not Supported 00:16:58.247 Fixed Capacity Management: Not Supported 00:16:58.247 Variable Capacity Management: Not Supported 00:16:58.247 Delete Endurance Group: Not Supported 00:16:58.247 Delete NVM Set: Not Supported 00:16:58.247 Extended LBA Formats Supported: Not Supported 00:16:58.247 Flexible Data Placement Supported: Not Supported 00:16:58.247 00:16:58.247 Controller Memory Buffer Support 00:16:58.247 ================================ 00:16:58.247 Supported: No 00:16:58.247 00:16:58.247 Persistent Memory Region Support 00:16:58.247 ================================ 00:16:58.247 Supported: No 00:16:58.247 00:16:58.247 Admin Command Set Attributes 00:16:58.247 ============================ 00:16:58.247 Security Send/Receive: Not Supported 00:16:58.247 Format NVM: Not Supported 00:16:58.247 Firmware Activate/Download: Not Supported 00:16:58.247 Namespace Management: Not Supported 00:16:58.247 Device Self-Test: Not Supported 00:16:58.247 Directives: Not Supported 00:16:58.247 NVMe-MI: Not Supported 00:16:58.247 Virtualization Management: Not Supported 00:16:58.247 Doorbell Buffer Config: Not Supported 00:16:58.247 Get LBA Status Capability: Not Supported 00:16:58.247 Command & Feature Lockdown Capability: Not Supported 00:16:58.247 Abort Command Limit: 4 00:16:58.247 Async Event Request Limit: 4 00:16:58.247 Number of Firmware Slots: N/A 00:16:58.247 Firmware Slot 1 Read-Only: N/A 00:16:58.247 Firmware Activation Without Reset: N/A 00:16:58.247 Multiple Update Detection Support: N/A 00:16:58.247 Firmware Update Granularity: No Information Provided 00:16:58.247 Per-Namespace SMART Log: Yes 00:16:58.247 Asymmetric Namespace Access Log Page: Supported 00:16:58.247 ANA Transition Time : 10 sec 00:16:58.247 00:16:58.247 Asymmetric Namespace Access Capabilities 00:16:58.247 ANA Optimized State : Supported 00:16:58.247 ANA Non-Optimized State : Supported 00:16:58.247 ANA Inaccessible State : Supported 00:16:58.247 ANA Persistent Loss State : Supported 00:16:58.247 ANA Change State : Supported 00:16:58.247 ANAGRPID is not changed : No 00:16:58.247 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:58.247 00:16:58.247 ANA Group Identifier Maximum : 128 00:16:58.247 Number of ANA Group Identifiers : 128 00:16:58.247 Max Number of Allowed Namespaces : 1024 00:16:58.247 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:58.247 Command Effects Log Page: Supported 00:16:58.247 Get Log Page Extended Data: Supported 00:16:58.247 Telemetry Log Pages: Not Supported 00:16:58.247 Persistent Event Log Pages: Not Supported 00:16:58.247 Supported Log Pages Log Page: May Support 00:16:58.247 Commands Supported & Effects Log Page: Not Supported 00:16:58.248 Feature Identifiers & Effects Log Page:May Support 00:16:58.248 NVMe-MI Commands & Effects Log Page: May Support 00:16:58.248 Data Area 4 for Telemetry Log: Not Supported 00:16:58.248 Error Log Page Entries Supported: 128 00:16:58.248 Keep Alive: Supported 00:16:58.248 Keep Alive Granularity: 1000 ms 00:16:58.248 00:16:58.248 NVM Command Set Attributes 00:16:58.248 ========================== 00:16:58.248 Submission Queue Entry Size 00:16:58.248 Max: 64 00:16:58.248 Min: 64 00:16:58.248 Completion Queue Entry Size 00:16:58.248 Max: 16 00:16:58.248 Min: 16 00:16:58.248 Number of Namespaces: 1024 00:16:58.248 Compare Command: Not Supported 00:16:58.248 Write Uncorrectable Command: Not 
Supported 00:16:58.248 Dataset Management Command: Supported 00:16:58.248 Write Zeroes Command: Supported 00:16:58.248 Set Features Save Field: Not Supported 00:16:58.248 Reservations: Not Supported 00:16:58.248 Timestamp: Not Supported 00:16:58.248 Copy: Not Supported 00:16:58.248 Volatile Write Cache: Present 00:16:58.248 Atomic Write Unit (Normal): 1 00:16:58.248 Atomic Write Unit (PFail): 1 00:16:58.248 Atomic Compare & Write Unit: 1 00:16:58.248 Fused Compare & Write: Not Supported 00:16:58.248 Scatter-Gather List 00:16:58.248 SGL Command Set: Supported 00:16:58.248 SGL Keyed: Not Supported 00:16:58.248 SGL Bit Bucket Descriptor: Not Supported 00:16:58.248 SGL Metadata Pointer: Not Supported 00:16:58.248 Oversized SGL: Not Supported 00:16:58.248 SGL Metadata Address: Not Supported 00:16:58.248 SGL Offset: Supported 00:16:58.248 Transport SGL Data Block: Not Supported 00:16:58.248 Replay Protected Memory Block: Not Supported 00:16:58.248 00:16:58.248 Firmware Slot Information 00:16:58.248 ========================= 00:16:58.248 Active slot: 0 00:16:58.248 00:16:58.248 Asymmetric Namespace Access 00:16:58.248 =========================== 00:16:58.248 Change Count : 0 00:16:58.248 Number of ANA Group Descriptors : 1 00:16:58.248 ANA Group Descriptor : 0 00:16:58.248 ANA Group ID : 1 00:16:58.248 Number of NSID Values : 1 00:16:58.248 Change Count : 0 00:16:58.248 ANA State : 1 00:16:58.248 Namespace Identifier : 1 00:16:58.248 00:16:58.248 Commands Supported and Effects 00:16:58.248 ============================== 00:16:58.248 Admin Commands 00:16:58.248 -------------- 00:16:58.248 Get Log Page (02h): Supported 00:16:58.248 Identify (06h): Supported 00:16:58.248 Abort (08h): Supported 00:16:58.248 Set Features (09h): Supported 00:16:58.248 Get Features (0Ah): Supported 00:16:58.248 Asynchronous Event Request (0Ch): Supported 00:16:58.248 Keep Alive (18h): Supported 00:16:58.248 I/O Commands 00:16:58.248 ------------ 00:16:58.248 Flush (00h): Supported 00:16:58.248 Write (01h): Supported LBA-Change 00:16:58.248 Read (02h): Supported 00:16:58.248 Write Zeroes (08h): Supported LBA-Change 00:16:58.248 Dataset Management (09h): Supported 00:16:58.248 00:16:58.248 Error Log 00:16:58.248 ========= 00:16:58.248 Entry: 0 00:16:58.248 Error Count: 0x3 00:16:58.248 Submission Queue Id: 0x0 00:16:58.248 Command Id: 0x5 00:16:58.248 Phase Bit: 0 00:16:58.248 Status Code: 0x2 00:16:58.248 Status Code Type: 0x0 00:16:58.248 Do Not Retry: 1 00:16:58.248 Error Location: 0x28 00:16:58.248 LBA: 0x0 00:16:58.248 Namespace: 0x0 00:16:58.248 Vendor Log Page: 0x0 00:16:58.248 ----------- 00:16:58.248 Entry: 1 00:16:58.248 Error Count: 0x2 00:16:58.248 Submission Queue Id: 0x0 00:16:58.248 Command Id: 0x5 00:16:58.248 Phase Bit: 0 00:16:58.248 Status Code: 0x2 00:16:58.248 Status Code Type: 0x0 00:16:58.248 Do Not Retry: 1 00:16:58.248 Error Location: 0x28 00:16:58.248 LBA: 0x0 00:16:58.248 Namespace: 0x0 00:16:58.248 Vendor Log Page: 0x0 00:16:58.248 ----------- 00:16:58.248 Entry: 2 00:16:58.248 Error Count: 0x1 00:16:58.248 Submission Queue Id: 0x0 00:16:58.248 Command Id: 0x4 00:16:58.248 Phase Bit: 0 00:16:58.248 Status Code: 0x2 00:16:58.248 Status Code Type: 0x0 00:16:58.248 Do Not Retry: 1 00:16:58.248 Error Location: 0x28 00:16:58.248 LBA: 0x0 00:16:58.248 Namespace: 0x0 00:16:58.248 Vendor Log Page: 0x0 00:16:58.248 00:16:58.248 Number of Queues 00:16:58.248 ================ 00:16:58.248 Number of I/O Submission Queues: 128 00:16:58.248 Number of I/O Completion Queues: 128 00:16:58.248 00:16:58.248 ZNS 
Specific Controller Data 00:16:58.248 ============================ 00:16:58.248 Zone Append Size Limit: 0 00:16:58.248 00:16:58.248 00:16:58.248 Active Namespaces 00:16:58.248 ================= 00:16:58.248 get_feature(0x05) failed 00:16:58.248 Namespace ID:1 00:16:58.248 Command Set Identifier: NVM (00h) 00:16:58.248 Deallocate: Supported 00:16:58.248 Deallocated/Unwritten Error: Not Supported 00:16:58.248 Deallocated Read Value: Unknown 00:16:58.248 Deallocate in Write Zeroes: Not Supported 00:16:58.248 Deallocated Guard Field: 0xFFFF 00:16:58.248 Flush: Supported 00:16:58.248 Reservation: Not Supported 00:16:58.248 Namespace Sharing Capabilities: Multiple Controllers 00:16:58.248 Size (in LBAs): 1310720 (5GiB) 00:16:58.248 Capacity (in LBAs): 1310720 (5GiB) 00:16:58.248 Utilization (in LBAs): 1310720 (5GiB) 00:16:58.248 UUID: 156140e1-10bf-4709-aa78-330a87b63c27 00:16:58.248 Thin Provisioning: Not Supported 00:16:58.248 Per-NS Atomic Units: Yes 00:16:58.248 Atomic Boundary Size (Normal): 0 00:16:58.248 Atomic Boundary Size (PFail): 0 00:16:58.248 Atomic Boundary Offset: 0 00:16:58.248 NGUID/EUI64 Never Reused: No 00:16:58.248 ANA group ID: 1 00:16:58.248 Namespace Write Protected: No 00:16:58.248 Number of LBA Formats: 1 00:16:58.248 Current LBA Format: LBA Format #00 00:16:58.248 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:58.248 00:16:58.248 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:58.248 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:58.248 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:58.507 rmmod nvme_tcp 00:16:58.507 rmmod nvme_fabrics 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:16:58.507 18:38:11 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:59.442 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:59.442 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:59.442 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:59.442 ************************************ 00:16:59.442 END TEST nvmf_identify_kernel_target 00:16:59.442 ************************************ 00:16:59.442 00:16:59.442 real 0m2.945s 00:16:59.442 user 0m1.035s 00:16:59.442 sys 0m1.415s 00:16:59.442 18:38:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:59.442 18:38:12 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.442 18:38:12 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:59.442 18:38:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:59.442 18:38:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:59.442 18:38:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:59.442 ************************************ 00:16:59.442 START TEST nvmf_auth_host 00:16:59.442 ************************************ 00:16:59.442 18:38:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:59.702 * Looking for test storage... 
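The clean_kernel_target sequence traced just above unwinds the configfs-based kernel NVMe-oF target that the identify test had exported at 10.0.0.1:4420. A condensed sketch of that teardown, using the subsystem NQN and configfs paths seen in the trace; note that xtrace does not capture redirections, so the target of the "echo 0" step (taken here to be the namespace's enable attribute, as the nvmet configfs tree is normally driven) is an assumption:

    # Tear down the kernel nvmet target created for nqn.2016-06.io.spdk:testnqn
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    echo 0 > "$subsys/namespaces/1/enable"   # assumption: disable the namespace before removing it
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink the subsystem from the TCP port
    rmdir "$subsys/namespaces/1"             # remove the namespace directory
    rmdir "$port"                            # remove the port (traddr 10.0.0.1, trsvcid 4420)
    rmdir "$subsys"                          # remove the subsystem itself
    modprobe -r nvmet_tcp nvmet              # unload the kernel target modules

With the configfs tree empty and the modules unloaded, setup.sh can rebind the NVMe PCI devices, which is what the uio_pci_generic lines that follow show.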
00:16:59.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.702 18:38:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:59.703 18:38:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:59.703 Cannot find device "nvmf_tgt_br" 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:59.703 Cannot find device "nvmf_tgt_br2" 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:59.703 Cannot find device "nvmf_tgt_br" 
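The "Cannot find device" messages here and immediately below are expected: nvmf_veth_init first tries to detach and delete any interfaces left over from a previous run before rebuilding the virtual topology. The commands that follow in the trace then create veth pairs into the nvmf_tgt_ns_spdk namespace and bridge them to the initiator side; a condensed sketch of that setup, using the names and addresses from the trace (the second target pair, nvmf_tgt_if2/nvmf_tgt_br2 at 10.0.0.3, is built the same way and omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge tying both sides together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target reachability check

The ping round trips recorded further down confirm that 10.0.0.2 and 10.0.0.3 are reachable from the host and that 10.0.0.1 is reachable from inside the target namespace before the nvmf target application is started.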
00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:59.703 Cannot find device "nvmf_tgt_br2" 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:59.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:59.703 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:59.703 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:59.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:59.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:16:59.962 00:16:59.962 --- 10.0.0.2 ping statistics --- 00:16:59.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.962 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:59.962 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:59.962 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:16:59.962 00:16:59.962 --- 10.0.0.3 ping statistics --- 00:16:59.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.962 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:59.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:59.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:59.962 00:16:59.962 --- 10.0.0.1 ping statistics --- 00:16:59.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.962 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78622 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78622 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 78622 ']' 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:59.962 18:38:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:59.962 18:38:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b32086edc89870ae11bc678567472d8e 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.u2O 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b32086edc89870ae11bc678567472d8e 0 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b32086edc89870ae11bc678567472d8e 0 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b32086edc89870ae11bc678567472d8e 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.u2O 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.u2O 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.u2O 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ffdc252a4b576f6d13609d51deed20ce1c739e381a6bc9ab3f0aaa99e6124b1e 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NOE 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ffdc252a4b576f6d13609d51deed20ce1c739e381a6bc9ab3f0aaa99e6124b1e 3 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ffdc252a4b576f6d13609d51deed20ce1c739e381a6bc9ab3f0aaa99e6124b1e 3 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ffdc252a4b576f6d13609d51deed20ce1c739e381a6bc9ab3f0aaa99e6124b1e 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NOE 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NOE 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.NOE 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4cf1671d0596d5bbac3f2e28a8b75ccf996b30aecfebcd79 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kKw 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4cf1671d0596d5bbac3f2e28a8b75ccf996b30aecfebcd79 0 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4cf1671d0596d5bbac3f2e28a8b75ccf996b30aecfebcd79 0 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4cf1671d0596d5bbac3f2e28a8b75ccf996b30aecfebcd79 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kKw 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kKw 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kKw 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4aa327ac6f32253c1645527c36ef3fb3da5fae5d369a7e66 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3DX 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4aa327ac6f32253c1645527c36ef3fb3da5fae5d369a7e66 2 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4aa327ac6f32253c1645527c36ef3fb3da5fae5d369a7e66 2 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4aa327ac6f32253c1645527c36ef3fb3da5fae5d369a7e66 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3DX 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3DX 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.3DX 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f9816068b59f1fa754766404166124f8 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.fQE 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f9816068b59f1fa754766404166124f8 
1 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f9816068b59f1fa754766404166124f8 1 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f9816068b59f1fa754766404166124f8 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:01.335 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.fQE 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.fQE 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.fQE 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=23c06cd68a2f9862b7758816ebe64436 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yR0 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 23c06cd68a2f9862b7758816ebe64436 1 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 23c06cd68a2f9862b7758816ebe64436 1 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=23c06cd68a2f9862b7758816ebe64436 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yR0 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yR0 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.yR0 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:01.593 18:38:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9f3148e2b401b6efb70709a3e3ed2006fffd2fa9f78fd279 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Z58 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9f3148e2b401b6efb70709a3e3ed2006fffd2fa9f78fd279 2 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9f3148e2b401b6efb70709a3e3ed2006fffd2fa9f78fd279 2 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9f3148e2b401b6efb70709a3e3ed2006fffd2fa9f78fd279 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:01.593 18:38:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Z58 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Z58 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Z58 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:01.593 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=efda6f5730327abf34d1561af500926f 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.4cR 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key efda6f5730327abf34d1561af500926f 0 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 efda6f5730327abf34d1561af500926f 0 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=efda6f5730327abf34d1561af500926f 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:01.594 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.4cR 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.4cR 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4cR 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=81c955f67d81cacb3672221f895e871d3fb81949028db36d1da09d5cb916bfa3 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.shz 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 81c955f67d81cacb3672221f895e871d3fb81949028db36d1da09d5cb916bfa3 3 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 81c955f67d81cacb3672221f895e871d3fb81949028db36d1da09d5cb916bfa3 3 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=81c955f67d81cacb3672221f895e871d3fb81949028db36d1da09d5cb916bfa3 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.shz 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.shz 00:17:01.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.shz 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78622 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 78622 ']' 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
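gen_dhchap_key, traced above for all five key slots, follows one pattern: draw random bytes with xxd, wrap the hex string into a DHHC-1 secret for the chosen digest, and store the result in a mode-0600 temp file. A minimal sketch of the visible steps for one 32-hex-character "null" key; the DHHC-1 wrapping itself is done by an inline "python -" helper whose body is not captured by xtrace, so this sketch stores the raw hex and only notes where that formatting happens:

    # Condensed from the gen_dhchap_key trace above: one 32-hex-character null-digest key.
    len=32
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 16 random bytes as 32 hex chars, e.g. b32086edc89870ae...
    file=$(mktemp -t spdk.key-null.XXX)              # e.g. /tmp/spdk.key-null.u2O
    # The traced script feeds $key through an uncaptured `python -` helper to format it as a
    # DHHC-1 secret before writing; this sketch writes the raw hex string instead.
    echo "$key" > "$file"
    chmod 0600 "$file"                               # DH-HMAC-CHAP secrets are credentials; keep them 0600

The sha256/sha384/sha512 variants in the trace differ only in the number of random bytes read (16, 24 or 32) and the digest id (1, 2 or 3) passed to the formatter; the resulting files are then registered with the target over JSON-RPC as key0..key4 and ckey0..ckey3 in the keyring_file_add_key calls that follow.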
00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.852 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.u2O 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.NOE ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NOE 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kKw 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.3DX ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3DX 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.fQE 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.yR0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yR0 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Z58 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4cR ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4cR 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.shz 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:02.110 18:38:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:02.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.676 Waiting for block devices as requested 00:17:02.676 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.676 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:03.241 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:03.241 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:03.241 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:03.241 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:17:03.241 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:03.241 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:17:03.241 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:03.241 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:03.242 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:03.242 No valid GPT data, bailing 00:17:03.242 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:03.500 No valid GPT data, bailing 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:03.500 No valid GPT data, bailing 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:03.500 No valid GPT data, bailing 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:03.500 18:38:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:03.500 18:38:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid=8b07fcc8-e6b3-4152-8362-9695ab742add -a 10.0.0.1 -t tcp -s 4420 00:17:03.759 00:17:03.759 Discovery Log Number of Records 2, Generation counter 2 00:17:03.759 =====Discovery Log Entry 0====== 00:17:03.759 trtype: tcp 00:17:03.759 adrfam: ipv4 00:17:03.759 subtype: current discovery subsystem 00:17:03.759 treq: not specified, sq flow control disable supported 00:17:03.759 portid: 1 00:17:03.759 trsvcid: 4420 00:17:03.759 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:03.759 traddr: 10.0.0.1 00:17:03.759 eflags: none 00:17:03.759 sectype: none 00:17:03.759 =====Discovery Log Entry 1====== 00:17:03.759 trtype: tcp 00:17:03.759 adrfam: ipv4 00:17:03.759 subtype: nvme subsystem 00:17:03.759 treq: not specified, sq flow control disable supported 00:17:03.759 portid: 1 00:17:03.759 trsvcid: 4420 00:17:03.759 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:03.759 traddr: 10.0.0.1 00:17:03.759 eflags: none 00:17:03.759 sectype: none 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.759 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.018 nvme0n1 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.018 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.019 nvme0n1 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.019 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.279 nvme0n1 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.279 18:38:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.279 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.538 nvme0n1 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:04.538 18:38:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.538 18:38:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.538 nvme0n1 00:17:04.538 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.538 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.538 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.538 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.538 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.538 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.795 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.796 nvme0n1 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:04.796 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.360 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.361 nvme0n1 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.361 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.671 nvme0n1 00:17:05.672 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.672 18:38:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.672 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.672 18:38:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.672 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.672 18:38:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.672 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.930 nvme0n1 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.930 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.931 nvme0n1 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.931 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
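[Editor's note] The nvmet_auth_set_key steps traced above (echo 'hmac(sha256)', echo ffdhe3072, echo DHHC-1:...) configure the kernel nvmet target side of DH-HMAC-CHAP for the host NQN used in this run. A minimal bash sketch of such a helper, assuming the usual /sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_* configfs attributes; the exact paths host/auth.sh writes to are not visible in this excerpt, so treat every path and attribute name below as an assumption rather than the test's actual implementation:

# Sketch only; configfs paths and attribute names are assumptions, not taken from this log.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    # hostnqn matches the -q argument seen in the bdev_nvme_attach_controller calls
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host}/dhchap_hash"      # e.g. hmac(sha256)
    echo "${dhgroup}"      > "${host}/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "${key}"          > "${host}/dhchap_key"       # DHHC-1:xx:...: host secret
    if [[ -n ${ckey} ]]; then
        echo "${ckey}" > "${host}/dhchap_ctrl_key"      # only for bidirectional auth
    fi
}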
00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.189 nvme0n1 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:06.189 18:38:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
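[Editor's note] On the initiator side, each connect_authenticate pass drives SPDK over JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py. A hedged sketch of the equivalent standalone commands for the sha256 / ffdhe4096 / keyid 0 iteration shown here, assuming key0 and ckey0 were already registered in the keyring earlier in the script (that step is outside this excerpt):

# Sketch only; reproduces one loop iteration of the trace by hand.
# 1) Restrict the host to a single digest and DH group for this pass.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# 2) Attach to the target with DH-HMAC-CHAP host and controller keys.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3) Confirm the controller came up, then tear it down for the next combination.
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0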
00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.124 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.125 nvme0n1 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.125 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.383 nvme0n1 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.383 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.641 18:38:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.641 nvme0n1 00:17:07.641 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.641 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.641 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.641 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.641 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.641 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.899 nvme0n1 00:17:07.899 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:08.158 18:38:21 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.158 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.416 nvme0n1 00:17:08.416 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.416 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.416 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:08.416 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.416 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.416 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.416 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.416 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.417 18:38:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.316 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.575 nvme0n1 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.575 18:38:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.142 nvme0n1 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.142 
18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.142 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.401 nvme0n1 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.401 18:38:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.968 nvme0n1 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.968 18:38:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.968 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.226 nvme0n1 00:17:12.226 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.226 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.226 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:12.226 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.226 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.226 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.486 18:38:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.053 nvme0n1 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.053 18:38:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.053 18:38:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.619 nvme0n1 00:17:13.619 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.619 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.619 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.619 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.619 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.619 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.877 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.445 nvme0n1 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.445 
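Each connect_authenticate pass reduces to three host-side actions: pin the initiator to the one digest/dhgroup pair under test, resolve the initiator address, and attach with the key names for that keyid. The sketch below condenses the RPC sequence exactly as it appears in the trace; connect_one is an illustrative name, and key0..key4 / ckey0..ckey4 are key names registered earlier in the test, outside this excerpt:

  # Illustrative wrapper around the RPC sequence traced above (connect_one is not a real helper).
  connect_one() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Advertise only the digest/dhgroup under test so negotiation cannot fall back.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 10.0.0.1 is NVMF_INITIATOR_IP in this run (see the get_main_ns_ip trace).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  }

For keyid 4 the trace drops --dhchap-ctrlr-key entirely; see the note a few records further down.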
18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.445 18:38:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.382 nvme0n1 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:15.382 
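keyid 4 is the one entry without a controller key: the trace shows ckey= set to the empty string and the [[ -z '' ]] check at host/auth.sh@51, so both the target-side setup and the host-side attach omit the controller-key argument and authentication runs in one direction only. The ${ckeys[keyid]:+...} expansion at host/auth.sh@58 is what makes the option pair optional; a small standalone illustration of that idiom (the array values here are placeholders, not real secrets):

  # The :+ expansion emits "--dhchap-ctrlr-key ckeyN" only when a controller key exists.
  declare -a ckeys=([0]="DHHC-1:03:placeholder=:" [4]="")
  for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid extra attach args: ${ckey[*]:-<none>}"
  done

Run as-is this prints the option pair for keyid 0 and <none> for keyid 4, matching the key4 attach above that carries no --dhchap-ctrlr-key.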
18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.382 18:38:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.949 nvme0n1 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.949 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 nvme0n1 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
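A few records back the outer loops rolled over: the digest advanced from sha256 to sha384 and the dhgroup list restarted at ffdhe2048 (the for digest / for dhgroup / for keyid lines at host/auth.sh@100-102). The overall shape of this part of the test is a three-level sweep; the skeleton below mirrors those loops using the function names from the trace, so it is a reading aid rather than a standalone script, and the digest/dhgroup lists include only the values that actually appear in this excerpt:

  # Skeleton of host/auth.sh@100-103 as seen in the trace: every digest x dhgroup x key combination.
  keys=(key0 key1 key2 key3 key4)            # placeholders; the real DHHC-1 secrets are set up earlier in the test
  digests=(sha256 sha384)                    # only these two digests appear in this excerpt
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192)   # only these groups appear in this excerpt
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target side (sketched further down)
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # SPDK host attaches, is verified, then detached
      done
    done
  done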
00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 nvme0n1 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.208 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.467 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.468 nvme0n1 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.468 18:38:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.727 nvme0n1 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.727 nvme0n1 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.727 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.985 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.985 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.985 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.985 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:16.985 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.985 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
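nvmet_auth_set_key is the target-side half of each iteration: the trace shows it echoing 'hmac(sha384)', the dhgroup name, the key and, when one exists, the controller key (host/auth.sh@48-51). set -x does not print redirections, so the destinations are not visible here; the sketch below assumes they are the DH-HMAC-CHAP attributes of the Linux nvmet configfs host entry, which is an assumption about the environment rather than something this log proves, and the host directory path is illustrative:

  # Hedged sketch of the target-side programming. The attribute names (dhchap_hash,
  # dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and the path are assumed, not taken from this log.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host_dir/dhchap_hash"       # host/auth.sh@48
  echo ffdhe3072      > "$host_dir/dhchap_dhgroup"    # host/auth.sh@49
  echo "$key"         > "$host_dir/dhchap_key"        # host/auth.sh@50; $key is the DHHC-1:... string from the trace
  [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # host/auth.sh@51, skipped when ckey is empty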
00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.986 nvme0n1 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.986 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
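get_main_ns_ip, traced over and over above (nvmf/common.sh@741-755), only maps the transport to the right environment variable and prints its value; for tcp that is NVMF_INITIATOR_IP, which is 10.0.0.1 in this run. A condensed reconstruction of what those xtrace lines show; the transport variable name (TEST_TRANSPORT) is assumed, and the real common.sh may carry extra handling that set -x simply does not print here:

  # Reconstructed from the nvmf/common.sh@741-755 xtrace output above.
  get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # NVMF_INITIATOR_IP when the transport is tcp
    [[ -z ${!ip} ]] && return 1            # indirect expansion: the value of the variable named by $ip
    echo "${!ip}"                          # 10.0.0.1 in this run
  }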
00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.244 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.245 nvme0n1 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.245 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.503 nvme0n1 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:17.503 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.504 18:38:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.762 nvme0n1 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.762 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.763 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.074 nvme0n1 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.074 18:38:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.074 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.333 nvme0n1 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.333 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.591 nvme0n1 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.591 18:38:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.591 18:38:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.849 nvme0n1 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:18.849 18:38:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.849 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.107 nvme0n1 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:19.107 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.365 nvme0n1 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.365 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.366 18:38:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.933 nvme0n1 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.933 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.191 nvme0n1 00:17:20.191 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.191 18:38:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.191 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.191 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.191 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.191 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.191 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.191 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.191 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.191 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.450 18:38:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.709 nvme0n1 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.709 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.275 nvme0n1 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:21.275 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.276 18:38:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.534 nvme0n1 00:17:21.534 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.534 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.534 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.534 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.534 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
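Each connect_authenticate pass in this trace boils down to four host-side RPCs: constrain the allowed digest/DH group, attach with the key under test, confirm the controller exists, and detach again. A minimal sketch of one such iteration (sha384 + ffdhe6144, keyid 3), assuming a running SPDK application reachable via scripts/rpc.py and key names key3/ckey3 that the test registered earlier; the NQNs and the 10.0.0.1:4420 listener are the ones shown in the log:

rpc=./scripts/rpc.py

# Limit host-side negotiation to the digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144

# Attach using the per-keyid host key; the controller key is only supplied
# when the test defines one for this keyid.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# Verify the controller came up with the expected name, then tear it down
# so the next digest/dhgroup/keyid combination starts clean.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0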
00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.792 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.793 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.360 nvme0n1 00:17:22.360 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.360 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.360 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.360 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.360 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.360 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.618 18:38:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.185 nvme0n1 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.185 18:38:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.118 nvme0n1 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.118 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.119 18:38:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.701 nvme0n1 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.701 18:38:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.701 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.272 nvme0n1 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.272 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.530 nvme0n1 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.530 18:38:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.530 18:38:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.531 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.531 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.531 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.531 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.531 18:38:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.531 18:38:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.531 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.531 18:38:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.789 nvme0n1 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.789 nvme0n1 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.789 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.790 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.790 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.048 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.048 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.048 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.048 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.048 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.048 18:38:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.048 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:26.048 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.049 18:38:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.049 nvme0n1 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.049 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.307 nvme0n1 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.307 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.566 nvme0n1 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.566 
18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.566 18:38:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.566 18:38:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.566 nvme0n1 00:17:26.566 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.566 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.566 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.566 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.566 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.566 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
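The echo 'hmac(sha512)' and echo ffdhe3072 lines just above, together with the two DHHC-1 echoes that follow, come from nvmet_auth_set_key: before every connect attempt the script provisions the target side of DH-HMAC-CHAP with the digest, the FFDHE group, the host key and (when present) the controller key for the selected key index. The redirect targets are trimmed out of the xtrace, so the sketch below assumes the kernel nvmet configfs attributes as the destination; treat the paths and the helper name as illustrative, not as the script's literal implementation. The 00/01/02/03 field after DHHC-1 identifies the transformation applied to the base64 secret (00 being a plain, untransformed secret), which is why the key and controller key of one key index can carry different tags.

#!/usr/bin/env bash
# Sketch of the target-side provisioning step traced at host/auth.sh@42-@51.
# The configfs paths below are an assumption (they are not visible in this
# excerpt); the secrets are the keyid=0 pair from the trace above.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 key=$3 ckey=$4
    # Assumed destination: per-host DH-CHAP attributes exposed by kernel nvmet.
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # e.g. hmac(sha512)
    echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # e.g. ffdhe3072
    echo "${key}"          > "${host_dir}/dhchap_key"      # host secret
    # The controller key is optional; keyid=4 in this run has none.
    [[ -z ${ckey} ]] || echo "${ckey}" > "${host_dir}/dhchap_ctrl_key"
}

nvmet_auth_set_key_sketch sha512 ffdhe3072 \
    "DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9:" \
    "DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=:"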
00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.825 nvme0n1 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.825 18:38:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.825 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.084 nvme0n1 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.084 
18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.084 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.085 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.085 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.085 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.085 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.085 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.085 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.342 nvme0n1 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:27.342 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.343 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.601 nvme0n1 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.601 18:38:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.601 18:38:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.601 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.859 nvme0n1 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
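Each pass of connect_authenticate repeats the same four RPCs that recur throughout this trace: bdev_nvme_set_options pins the host to a single DH-CHAP digest and FFDHE group, bdev_nvme_attach_controller connects to 10.0.0.1:4420 with the named key (and, except for key index 4, the matching controller key), bdev_nvme_get_controllers piped through jq confirms that the handshake produced a controller named nvme0, and bdev_nvme_detach_controller tears it down before the next iteration; the bare nvme0n1 tokens interleaved in the output are the namespace bdev names printed by each successful attach. A condensed sketch of that sequence follows; it assumes rpc_cmd forwards to SPDK's rpc.py and that the named keys (key0..key4, ckey0..ckey3) were registered earlier in the script, outside this excerpt.

#!/usr/bin/env bash
# Condensed sketch of connect_authenticate (host/auth.sh@55-@65) as it appears
# in the xtrace. rpc_cmd and the named keyring entries are assumed to be set
# up earlier in the test script.
connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ip=10.0.0.1                      # NVMF_INITIATOR_IP per get_main_ns_ip
    local ckey=()
    [[ $keyid -eq 4 ]] || ckey=(--dhchap-ctrlr-key "ckey${keyid}")  # keyid 4 has no ckey in this run

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # A successful bidirectional handshake registers controller nvme0.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

# e.g. the iteration just above: connect_authenticate_sketch sha512 ffdhe4096 1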
00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.859 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 nvme0n1 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.118 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.377 nvme0n1 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.377 18:38:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.636 nvme0n1 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
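The for-loop markers at host/auth.sh@101 and @102 above are what drive this whole block: the outer loop walks the FFDHE groups (ffdhe3072 and ffdhe4096 so far, ffdhe6144 from here on) and the inner loop walks key indexes 0 through 4, so the provision/connect/verify/detach sequence runs once per (dhgroup, keyid) combination for the sha512 digest. Structurally the block reduces to the nested loop below; the keys array and the two helpers are defined earlier in the script, outside this excerpt.

#!/usr/bin/env bash
# Structural sketch of the iteration traced at host/auth.sh@101-@104.
# keys/ckeys and the helper functions are assumed to be defined earlier
# in host/auth.sh (not shown here).
digest=sha512
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # the groups exercised in this excerpt
declare -a keys ckeys                      # DHHC-1 secrets, indexes 0..4

for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101
    for keyid in "${!keys[@]}"; do         # host/auth.sh@102
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (@103)
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side   (@104)
    done
done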
00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.636 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.203 nvme0n1 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.203 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.461 nvme0n1 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.461 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.719 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.720 18:38:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.978 nvme0n1 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.978 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.236 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.494 nvme0n1 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.494 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.495 18:38:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.060 nvme0n1 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.060 18:38:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjMyMDg2ZWRjODk4NzBhZTExYmM2Nzg1Njc0NzJkOGUwkXf9: 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: ]] 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmZkYzI1MmE0YjU3NmY2ZDEzNjA5ZDUxZGVlZDIwY2UxYzczOWUzODFhNmJjOWFiM2YwYWFhOTllNjEyNGIxZUUYZ4Y=: 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.060 18:38:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.625 nvme0n1 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.625 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.557 nvme0n1 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.557 18:38:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Zjk4MTYwNjhiNTlmMWZhNzU0NzY2NDA0MTY2MTI0ZjhQxdKI: 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: ]] 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjNjMDZjZDY4YTJmOTg2MmI3NzU4ODE2ZWJlNjQ0MzYRdP9/: 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.557 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.558 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.558 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.558 18:38:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.558 18:38:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.558 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.558 18:38:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.123 nvme0n1 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWYzMTQ4ZTJiNDAxYjZlZmI3MDcwOWEzZTNlZDIwMDZmZmZkMmZhOWY3OGZkMjc5IzvYgA==: 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: ]] 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZWZkYTZmNTczMDMyN2FiZjM0ZDE1NjFhZjUwMDkyNmaVBtOD: 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:33.123 18:38:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.123 18:38:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.689 nvme0n1 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.689 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODFjOTU1ZjY3ZDgxY2FjYjM2NzIyMjFmODk1ZTg3MWQzZmI4MTk0OTAyOGRiMzZkMWRhMDlkNWNiOTE2YmZhM3Dptt0=: 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:33.947 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.514 nvme0n1 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNmMTY3MWQwNTk2ZDViYmFjM2YyZTI4YThiNzVjY2Y5OTZiMzBhZWNmZWJjZDc5gb8lhg==: 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NGFhMzI3YWM2ZjMyMjUzYzE2NDU1MjdjMzZlZjNmYjNkYTVmYWU1ZDM2OWE3ZTY2rwjt8A==: 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.514 
18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.514 request: 00:17:34.514 { 00:17:34.514 "name": "nvme0", 00:17:34.514 "trtype": "tcp", 00:17:34.514 "traddr": "10.0.0.1", 00:17:34.514 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:34.514 "adrfam": "ipv4", 00:17:34.514 "trsvcid": "4420", 00:17:34.514 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:34.514 "method": "bdev_nvme_attach_controller", 00:17:34.514 "req_id": 1 00:17:34.514 } 00:17:34.514 Got JSON-RPC error response 00:17:34.514 response: 00:17:34.514 { 00:17:34.514 "code": -32602, 00:17:34.514 "message": "Invalid parameters" 00:17:34.514 } 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:34.514 
18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.514 18:38:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.773 request: 00:17:34.773 { 00:17:34.773 "name": "nvme0", 00:17:34.773 "trtype": "tcp", 00:17:34.773 "traddr": "10.0.0.1", 00:17:34.773 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:34.773 "adrfam": "ipv4", 00:17:34.773 "trsvcid": "4420", 00:17:34.773 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:34.773 "dhchap_key": "key2", 00:17:34.773 "method": "bdev_nvme_attach_controller", 00:17:34.773 "req_id": 1 00:17:34.773 } 00:17:34.773 Got JSON-RPC error response 00:17:34.773 response: 00:17:34.773 { 00:17:34.773 "code": -32602, 00:17:34.773 "message": "Invalid parameters" 00:17:34.773 } 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
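The failed attach attempts above (first with no DH-HMAC-CHAP key, then with key2 alone, and next with the mismatched key1/ckey2 pair) bracket the positive path that the digest/dhgroup/keyid loops exercise. The following is a minimal sketch of one such iteration, assuming rpc_cmd in the trace resolves to scripts/rpc.py against the running target and that the kernel nvmet host entry exposes the dhchap_* configfs attributes written by host/auth.sh@48-51; the DHHC-1 strings are placeholders, and key2/ckey2 stand for key names registered with the same secrets earlier in the test (not shown here).

#!/usr/bin/env bash
# Sketch of one nvmet_auth_set_key + connect_authenticate iteration from the trace.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # assumed path for rpc_cmd
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn     # same host dir the cleanup rmdir's later

digest=sha512
dhgroup=ffdhe8192
key='DHHC-1:01:placeholder-key:'                     # illustrative only, not a real secret
ckey='DHHC-1:01:placeholder-ctrl-key:'               # illustrative only

# Target side: program the expected hash, DH group and key pair for this host
# (attribute names assumed from the kernel nvmet configfs auth interface).
echo "hmac(${digest})" > "$host_cfg/dhchap_hash"
echo "$dhgroup"        > "$host_cfg/dhchap_dhgroup"
echo "$key"            > "$host_cfg/dhchap_key"
[[ -n "$ckey" ]] && echo "$ckey" > "$host_cfg/dhchap_ctrl_key"

# Host side: restrict the initiator to the same digest/DH group, then attach with
# the matching key names, as bdev_nvme_set_options/attach_controller do in the trace.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Success check mirrors host/auth.sh@64-65: the controller must appear, then it is
# detached before the next digest/dhgroup/keyid combination.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0

# Negative case (host/auth.sh@112/117/123 above): an attach that presents no key, or
# a key the target was not programmed with, must fail rather than authenticate.
if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn"; then
        echo "unexpected: unauthenticated attach succeeded" >&2
        exit 1
fi

In the trace the expectation is exactly that split: a matching key pair yields an nvme0 controller, while each of the three bad combinations returns the JSON-RPC -32602 "Invalid parameters" error shown in the request/response dumps.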
00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.773 request: 00:17:34.773 { 00:17:34.773 "name": "nvme0", 00:17:34.773 "trtype": "tcp", 00:17:34.773 "traddr": "10.0.0.1", 00:17:34.773 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:34.773 "adrfam": "ipv4", 00:17:34.773 "trsvcid": "4420", 00:17:34.773 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:34.773 "dhchap_key": "key1", 00:17:34.773 "dhchap_ctrlr_key": "ckey2", 00:17:34.773 "method": "bdev_nvme_attach_controller", 00:17:34.773 
"req_id": 1 00:17:34.773 } 00:17:34.773 Got JSON-RPC error response 00:17:34.773 response: 00:17:34.773 { 00:17:34.773 "code": -32602, 00:17:34.773 "message": "Invalid parameters" 00:17:34.773 } 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:34.773 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:34.774 rmmod nvme_tcp 00:17:34.774 rmmod nvme_fabrics 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78622 ']' 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78622 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 78622 ']' 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 78622 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78622 00:17:34.774 killing process with pid 78622 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78622' 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 78622 00:17:34.774 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 78622 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:35.098 18:38:48 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:35.098 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:35.373 18:38:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:35.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:35.938 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:35.938 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:36.197 18:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.u2O /tmp/spdk.key-null.kKw /tmp/spdk.key-sha256.fQE /tmp/spdk.key-sha384.Z58 /tmp/spdk.key-sha512.shz /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:36.197 18:38:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:36.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:36.455 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.455 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.455 ************************************ 00:17:36.455 END TEST nvmf_auth_host 00:17:36.455 ************************************ 00:17:36.455 00:17:36.455 real 0m37.015s 00:17:36.455 user 0m33.135s 00:17:36.455 sys 0m3.972s 00:17:36.455 18:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:36.455 18:38:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.455 18:38:49 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:17:36.455 18:38:49 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:36.455 18:38:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 
']' 00:17:36.455 18:38:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.455 18:38:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:36.714 ************************************ 00:17:36.714 START TEST nvmf_digest 00:17:36.714 ************************************ 00:17:36.714 18:38:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:36.714 * Looking for test storage... 00:17:36.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:36.714 18:38:50 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:36.714 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:36.715 Cannot find device "nvmf_tgt_br" 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:36.715 Cannot find device "nvmf_tgt_br2" 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:36.715 Cannot find device "nvmf_tgt_br" 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:36.715 Cannot find device "nvmf_tgt_br2" 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:36.715 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:36.974 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:36.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:36.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:36.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:17:36.974 00:17:36.974 --- 10.0.0.2 ping statistics --- 00:17:36.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.974 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:36.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:36.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:17:36.974 00:17:36.974 --- 10.0.0.3 ping statistics --- 00:17:36.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.974 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:36.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:17:36.974 00:17:36.974 --- 10.0.0.1 ping statistics --- 00:17:36.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.974 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:36.974 ************************************ 00:17:36.974 START TEST nvmf_digest_clean 00:17:36.974 ************************************ 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:36.974 18:38:50 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80206 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80206 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 80206 ']' 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:36.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:36.974 18:38:50 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:37.232 [2024-05-16 18:38:50.519269] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:17:37.232 [2024-05-16 18:38:50.519569] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.232 [2024-05-16 18:38:50.663665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.497 [2024-05-16 18:38:50.824716] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.497 [2024-05-16 18:38:50.824804] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.497 [2024-05-16 18:38:50.824848] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.497 [2024-05-16 18:38:50.824860] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.497 [2024-05-16 18:38:50.824869] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
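For anyone reproducing this environment by hand, the nvmf_veth_init block above reduces to the following shell sketch. Every command, interface name and address is taken from the xtrace (the earlier "Cannot find device" / "Cannot open network namespace" messages are just cleanup of a namespace that did not exist yet), and the final line is the nvmf_tgt launch that waitforlisten is blocking on here:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator, two for the target; the *_br peers stay in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  # bridge the root-namespace peers so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  # the target runs inside the namespace, paused until RPC configuration arrives
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc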
00:17:37.497 [2024-05-16 18:38:50.824909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.066 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:38.324 [2024-05-16 18:38:51.633061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:38.324 null0 00:17:38.324 [2024-05-16 18:38:51.694177] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.324 [2024-05-16 18:38:51.718106] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:38.324 [2024-05-16 18:38:51.718385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80238 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80238 /var/tmp/bperf.sock 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 80238 ']' 00:17:38.324 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 
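The "null0", "*** TCP Transport Init ***" and "Listening on 10.0.0.2 port 4420" notices above are the visible effect of common_target_config (host/digest.sh@126). The payload that rpc_cmd feeds to the target is not echoed in the xtrace, so the sequence below is only an approximate reconstruction: the subsystem NQN, serial number, transport options and listener address come from the log, while the null-bdev size/block-size arguments and the -a flag are assumptions:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # default socket /var/tmp/spdk.sock, per the waitforlisten lines above
  $rpc framework_start_init                         # leave --wait-for-rpc; this is where the uring sock override notice appears
  $rpc bdev_null_create null0 100 4096              # backing namespace (size and block size assumed)
  $rpc nvmf_create_transport -t tcp -o              # NVMF_TRANSPORT_OPTS='-t tcp -o' from nvmf/common.sh above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The deprecation warning about [listen_]address.transport only means the listener was described with the older "transport" key instead of "trtype"; it is flagged for removal in v24.09 but harmless here.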
00:17:38.325 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:38.325 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:38.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:38.325 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:38.325 18:38:51 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:38.325 [2024-05-16 18:38:51.767668] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:17:38.325 [2024-05-16 18:38:51.768009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80238 ] 00:17:38.582 [2024-05-16 18:38:51.903059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.582 [2024-05-16 18:38:52.010407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:39.516 18:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:39.516 18:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:17:39.516 18:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:39.516 18:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:39.516 18:38:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:39.774 [2024-05-16 18:38:53.088454] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:39.774 18:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:39.774 18:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:40.033 nvme0n1 00:17:40.033 18:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:40.033 18:38:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:40.293 Running I/O for 2 seconds... 
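On the initiator side, every pass of run_bperf repeats the same four steps that are spread through the xtrace above; condensed, with paths, sockets and arguments exactly as they appear in the log:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # 1. start bdevperf on its own RPC socket; -z makes it wait for a perform_tests RPC, --wait-for-rpc defers framework init
  $bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # 2. once /var/tmp/bperf.sock is up, finish subsystem init
  $rpc -s /var/tmp/bperf.sock framework_start_init
  # 3. attach the remote namespace with data digest enabled; --ddgst is the point of the whole test
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. drive the resulting nvme0n1 bdev for the 2-second run whose results follow
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Only the -w, -o and -q values change between the clean-digest passes; everything else stays the same.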
00:17:42.194 00:17:42.194 Latency(us) 00:17:42.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.194 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:42.194 nvme0n1 : 2.01 14927.91 58.31 0.00 0.00 8567.32 7804.74 22520.55 00:17:42.194 =================================================================================================================== 00:17:42.194 Total : 14927.91 58.31 0.00 0.00 8567.32 7804.74 22520.55 00:17:42.194 0 00:17:42.194 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:42.194 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:42.194 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:42.194 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:42.194 | select(.opcode=="crc32c") 00:17:42.194 | "\(.module_name) \(.executed)"' 00:17:42.194 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80238 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 80238 ']' 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 80238 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80238 00:17:42.453 killing process with pid 80238 00:17:42.453 Received shutdown signal, test time was about 2.000000 seconds 00:17:42.453 00:17:42.453 Latency(us) 00:17:42.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.453 =================================================================================================================== 00:17:42.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80238' 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 80238 00:17:42.453 18:38:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 80238 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80298 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80298 /var/tmp/bperf.sock 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 80298 ']' 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:42.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:42.711 18:38:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:42.970 [2024-05-16 18:38:56.250490] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:17:42.970 [2024-05-16 18:38:56.250981] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80298 ] 00:17:42.970 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:42.970 Zero copy mechanism will not be used. 
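The pass/fail logic for each run is the accel-stats check at host/digest.sh@93-96 in the xtrace above: the run only counts if crc32c work was actually executed, and by the module the test expects. Pulled out of the harness, the check amounts to:

  read -r acc_module acc_executed < <(
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  exp_module=software                 # host/digest.sh@94: scan_dsa is false here, so crc32c should have run in software
  (( acc_executed > 0 ))              # some crc32c operations must have been executed...
  [[ $acc_module == "$exp_module" ]]  # ...and by the expected accel module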
00:17:42.970 [2024-05-16 18:38:56.396556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.229 [2024-05-16 18:38:56.479323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.796 18:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:43.796 18:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:17:43.796 18:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:43.797 18:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:43.797 18:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:44.056 [2024-05-16 18:38:57.538194] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:44.315 18:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.315 18:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.574 nvme0n1 00:17:44.574 18:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:44.574 18:38:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:44.574 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:44.574 Zero copy mechanism will not be used. 00:17:44.574 Running I/O for 2 seconds... 
00:17:47.106 00:17:47.106 Latency(us) 00:17:47.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.106 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:47.106 nvme0n1 : 2.00 6446.59 805.82 0.00 0.00 2478.04 2278.87 4200.26 00:17:47.106 =================================================================================================================== 00:17:47.106 Total : 6446.59 805.82 0.00 0.00 2478.04 2278.87 4200.26 00:17:47.106 0 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:47.106 | select(.opcode=="crc32c") 00:17:47.106 | "\(.module_name) \(.executed)"' 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80298 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 80298 ']' 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 80298 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:17:47.106 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80298 00:17:47.107 killing process with pid 80298 00:17:47.107 Received shutdown signal, test time was about 2.000000 seconds 00:17:47.107 00:17:47.107 Latency(us) 00:17:47.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.107 =================================================================================================================== 00:17:47.107 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80298' 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 80298 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 80298 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80357 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80357 /var/tmp/bperf.sock 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 80357 ']' 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:47.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:47.107 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:47.365 [2024-05-16 18:39:00.616529] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:17:47.365 [2024-05-16 18:39:00.616805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80357 ] 00:17:47.365 [2024-05-16 18:39:00.751909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.365 [2024-05-16 18:39:00.829777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.624 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:47.624 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:17:47.624 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:47.624 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:47.624 18:39:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:47.882 [2024-05-16 18:39:01.153125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:47.882 18:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:47.882 18:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.141 nvme0n1 00:17:48.141 18:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:48.141 18:39:01 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:48.141 Running I/O for 2 seconds... 
00:17:50.672 00:17:50.672 Latency(us) 00:17:50.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.672 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.672 nvme0n1 : 2.01 15959.86 62.34 0.00 0.00 8013.08 2532.07 15252.01 00:17:50.672 =================================================================================================================== 00:17:50.672 Total : 15959.86 62.34 0.00 0.00 8013.08 2532.07 15252.01 00:17:50.672 0 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:50.672 | select(.opcode=="crc32c") 00:17:50.672 | "\(.module_name) \(.executed)"' 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80357 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 80357 ']' 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 80357 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80357 00:17:50.672 killing process with pid 80357 00:17:50.672 Received shutdown signal, test time was about 2.000000 seconds 00:17:50.672 00:17:50.672 Latency(us) 00:17:50.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.672 =================================================================================================================== 00:17:50.672 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80357' 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 80357 00:17:50.672 18:39:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 80357 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80410 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80410 /var/tmp/bperf.sock 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 80410 ']' 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:50.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.931 18:39:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:50.931 [2024-05-16 18:39:04.267746] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:17:50.931 [2024-05-16 18:39:04.268096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80410 ] 00:17:50.931 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:50.931 Zero copy mechanism will not be used. 
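A quick sanity check on the Latency(us) tables scattered through this log: the MiB/s column is just IOPS times the I/O size. For the two randread passes above, for example:

  awk 'BEGIN { printf "%.2f\n", 14927.91 * 4096   / 1048576 }'   # 58.31, matches the 4 KiB / qd 128 row
  awk 'BEGIN { printf "%.2f\n", 6446.59  * 131072 / 1048576 }'   # ~805.8; the 128 KiB / qd 16 row shows 805.82
                                                                  # (the reported IOPS are themselves rounded)

The all-zero tables printed next to the "killing process" messages can be ignored; they are the summary bdevperf prints while shutting down, after perform_tests has already reported the real numbers.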
00:17:50.931 [2024-05-16 18:39:04.409337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.189 [2024-05-16 18:39:04.509546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.755 18:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:51.755 18:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:17:51.755 18:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:51.755 18:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:51.755 18:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:52.321 [2024-05-16 18:39:05.526246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:52.321 18:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.321 18:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.579 nvme0n1 00:17:52.579 18:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:52.579 18:39:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:52.579 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:52.579 Zero copy mechanism will not be used. 00:17:52.579 Running I/O for 2 seconds... 
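The table that follows closes out the fourth and last clean-digest pass. Taken together, nvmf_digest_clean is just this matrix, shown here as a condensed sketch (digest.sh issues the four run_bperf calls individually rather than in a loop):

  for args in 'randread 4096 128' 'randread 131072 16' 'randwrite 4096 128' 'randwrite 131072 16'; do
      run_bperf $args false      # trailing false = scan_dsa off for all four passes
  done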
00:17:55.130 00:17:55.130 Latency(us) 00:17:55.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.130 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:55.130 nvme0n1 : 2.00 6249.75 781.22 0.00 0.00 2553.69 1936.29 7357.91 00:17:55.130 =================================================================================================================== 00:17:55.130 Total : 6249.75 781.22 0.00 0.00 2553.69 1936.29 7357.91 00:17:55.130 0 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:55.130 | select(.opcode=="crc32c") 00:17:55.130 | "\(.module_name) \(.executed)"' 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80410 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 80410 ']' 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 80410 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80410 00:17:55.130 killing process with pid 80410 00:17:55.130 Received shutdown signal, test time was about 2.000000 seconds 00:17:55.130 00:17:55.130 Latency(us) 00:17:55.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.130 =================================================================================================================== 00:17:55.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80410' 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 80410 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 80410 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80206 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 80206 ']' 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 80206 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:55.130 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80206 00:17:55.388 killing process with pid 80206 00:17:55.388 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:55.388 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:55.388 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80206' 00:17:55.388 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 80206 00:17:55.388 [2024-05-16 18:39:08.650985] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:55.388 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 80206 00:17:55.646 ************************************ 00:17:55.646 END TEST nvmf_digest_clean 00:17:55.646 ************************************ 00:17:55.646 00:17:55.646 real 0m18.503s 00:17:55.646 user 0m35.068s 00:17:55.646 sys 0m5.229s 00:17:55.646 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:55.646 18:39:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:55.646 18:39:08 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:55.646 18:39:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:17:55.646 18:39:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:55.646 18:39:08 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:55.646 ************************************ 00:17:55.646 START TEST nvmf_digest_error 00:17:55.646 ************************************ 00:17:55.646 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:17:55.646 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:55.646 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:55.646 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:55.646 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:55.647 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80499 00:17:55.647 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80499 00:17:55.647 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:55.647 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 80499 ']' 00:17:55.647 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:17:55.647 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.647 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:55.647 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.647 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:55.647 18:39:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:55.647 [2024-05-16 18:39:09.065424] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:17:55.647 [2024-05-16 18:39:09.065541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.906 [2024-05-16 18:39:09.197353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.906 [2024-05-16 18:39:09.344003] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.906 [2024-05-16 18:39:09.344080] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.906 [2024-05-16 18:39:09.344108] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.906 [2024-05-16 18:39:09.344116] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.906 [2024-05-16 18:39:09.344123] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:55.906 [2024-05-16 18:39:09.344155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:56.843 [2024-05-16 18:39:10.069241] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:56.843 [2024-05-16 18:39:10.156411] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:56.843 null0 00:17:56.843 [2024-05-16 18:39:10.218493] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.843 [2024-05-16 18:39:10.242406] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:56.843 [2024-05-16 18:39:10.242688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80531 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80531 /var/tmp/bperf.sock 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 80531 ']' 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:56.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:56.843 18:39:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:56.843 [2024-05-16 18:39:10.300023] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:17:56.843 [2024-05-16 18:39:10.300320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80531 ] 00:17:57.103 [2024-05-16 18:39:10.441346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.103 [2024-05-16 18:39:10.596269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.362 [2024-05-16 18:39:10.673977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:57.930 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:57.930 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:17:57.930 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:57.930 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:58.188 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:58.188 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.188 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:58.188 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.188 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.188 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:58.446 nvme0n1 00:17:58.446 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:58.446 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.446 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:58.446 
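At this point the error-injection variant is fully set up, and it differs from the clean runs in only a handful of RPCs, all visible in the xtrace above. Condensed (rpc_cmd talks to the target over its default /var/tmp/spdk.sock, the -s /var/tmp/bperf.sock calls go to bdevperf; flags copied verbatim from the log):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target, still paused by --wait-for-rpc: route crc32c through the injectable 'error' accel module
  $rpc accel_assign_opc -o crc32c -m error
  # ...common_target_config as before, then on the initiator (bdevperf started with -z but without --wait-for-rpc this time):
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable          # injection type 'disable' while the controller attaches
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256   # then switch to 'corrupt' (arguments as in host/digest.sh@67)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The stream of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" records that follows is what the test is trying to provoke: with the target's crc32c deliberately corrupted, the initiator's NVMe/TCP layer flags the mismatching digests instead of accepting the data.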
18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.446 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:58.446 18:39:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:58.706 Running I/O for 2 seconds... 00:17:58.706 [2024-05-16 18:39:12.034329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.034404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.034419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.706 [2024-05-16 18:39:12.052201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.052240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.052269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.706 [2024-05-16 18:39:12.070457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.070498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.070513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.706 [2024-05-16 18:39:12.088048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.088087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.088101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.706 [2024-05-16 18:39:12.106364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.106403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.106417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.706 [2024-05-16 18:39:12.124293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.124332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.124347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.706 [2024-05-16 
18:39:12.141776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.141815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.141859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.706 [2024-05-16 18:39:12.159706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.159745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.159760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.706 [2024-05-16 18:39:12.177800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.177852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.177866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.706 [2024-05-16 18:39:12.196236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.706 [2024-05-16 18:39:12.196303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.706 [2024-05-16 18:39:12.196318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.214478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.214516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.214530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.232305] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.232346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.232361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.250193] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.250261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.250276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.267699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.267737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.267751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.285403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.285441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.285455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.303165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.303226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.303240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.321377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.321415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.321429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.339750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.339787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.339815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.357853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.357898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.357913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.376007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.376044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.376072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.394075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.394110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.394137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.412242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.412301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.412315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.430628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.430686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.430715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.965 [2024-05-16 18:39:12.448707] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:58.965 [2024-05-16 18:39:12.448745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.965 [2024-05-16 18:39:12.448759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.466555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.466593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.466607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.484497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.484534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7855 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.484548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.502338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.502375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.502389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.520140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.520193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.520206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.537972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.538010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.538024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.555574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.555624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.555653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.573477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.573523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.573536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.591214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.591252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.591265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.608669] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.608713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.608726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.625918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.625955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:59.224 [2024-05-16 18:39:12.625969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.643308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.643346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.643359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.660717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.660755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.660769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.678229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.678297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.678311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.695724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.695764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.695779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.224 [2024-05-16 18:39:12.713536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.224 [2024-05-16 18:39:12.713572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.224 [2024-05-16 18:39:12.713586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.731416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.731459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.731473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.749339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.749382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:10676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.749396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.767184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.767233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.767246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.785083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.785136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.785151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.803073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.803110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.803124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.821056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.821094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.821107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.838590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.838628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.838643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.856016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.856053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.856066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.873840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.873888] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.873902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.891378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.891421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.891435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.909011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.909046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:9791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.909074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.927295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.927333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.927347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.945476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.945513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.945527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.963636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.963675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.963688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.482 [2024-05-16 18:39:12.981864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.482 [2024-05-16 18:39:12.981924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.482 [2024-05-16 18:39:12.981937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.000258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 
00:17:59.741 [2024-05-16 18:39:13.000320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.741 [2024-05-16 18:39:13.000333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.017953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.741 [2024-05-16 18:39:13.018007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.741 [2024-05-16 18:39:13.018020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.035797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.741 [2024-05-16 18:39:13.035846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.741 [2024-05-16 18:39:13.035861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.053717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.741 [2024-05-16 18:39:13.053754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.741 [2024-05-16 18:39:13.053767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.071662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.741 [2024-05-16 18:39:13.071699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.741 [2024-05-16 18:39:13.071713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.089440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.741 [2024-05-16 18:39:13.089477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.741 [2024-05-16 18:39:13.089490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.107365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.741 [2024-05-16 18:39:13.107403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.741 [2024-05-16 18:39:13.107417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.124887] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.741 [2024-05-16 18:39:13.124927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.741 [2024-05-16 18:39:13.124940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.142688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.741 [2024-05-16 18:39:13.142725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.741 [2024-05-16 18:39:13.142739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.741 [2024-05-16 18:39:13.168312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.742 [2024-05-16 18:39:13.168351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.742 [2024-05-16 18:39:13.168365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.742 [2024-05-16 18:39:13.186369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.742 [2024-05-16 18:39:13.186407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.742 [2024-05-16 18:39:13.186420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.742 [2024-05-16 18:39:13.204086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.742 [2024-05-16 18:39:13.204130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.742 [2024-05-16 18:39:13.204143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.742 [2024-05-16 18:39:13.221908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.742 [2024-05-16 18:39:13.221944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.742 [2024-05-16 18:39:13.221958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:59.742 [2024-05-16 18:39:13.239733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:17:59.742 [2024-05-16 18:39:13.239771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:59.742 [2024-05-16 18:39:13.239785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:18:00.001 [2024-05-16 18:39:13.257559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.001 [2024-05-16 18:39:13.257596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.001 [2024-05-16 18:39:13.257609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.001 [2024-05-16 18:39:13.275535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.001 [2024-05-16 18:39:13.275572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.001 [2024-05-16 18:39:13.275597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.001 [2024-05-16 18:39:13.292958] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.001 [2024-05-16 18:39:13.292996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.001 [2024-05-16 18:39:13.293009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.001 [2024-05-16 18:39:13.310420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.001 [2024-05-16 18:39:13.310467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.001 [2024-05-16 18:39:13.310481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.001 [2024-05-16 18:39:13.327906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.001 [2024-05-16 18:39:13.327962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.001 [2024-05-16 18:39:13.327992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.001 [2024-05-16 18:39:13.346137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.001 [2024-05-16 18:39:13.346189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.001 [2024-05-16 18:39:13.346218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.001 [2024-05-16 18:39:13.364297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.001 [2024-05-16 18:39:13.364335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.001 [2024-05-16 18:39:13.364349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.002 [2024-05-16 18:39:13.382577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.002 [2024-05-16 18:39:13.382614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.002 [2024-05-16 18:39:13.382627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.002 [2024-05-16 18:39:13.400583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.002 [2024-05-16 18:39:13.400621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.002 [2024-05-16 18:39:13.400634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.002 [2024-05-16 18:39:13.418906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.002 [2024-05-16 18:39:13.418982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.002 [2024-05-16 18:39:13.419012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.002 [2024-05-16 18:39:13.437117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.002 [2024-05-16 18:39:13.437152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.002 [2024-05-16 18:39:13.437181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.002 [2024-05-16 18:39:13.455411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.002 [2024-05-16 18:39:13.455448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.002 [2024-05-16 18:39:13.455461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.002 [2024-05-16 18:39:13.481443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.002 [2024-05-16 18:39:13.481480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.002 [2024-05-16 18:39:13.481494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.261 [2024-05-16 18:39:13.502377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.261 [2024-05-16 18:39:13.502415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.261 [2024-05-16 18:39:13.502429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.261 [2024-05-16 18:39:13.522979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.261 [2024-05-16 18:39:13.523045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.261 [2024-05-16 18:39:13.523074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.261 [2024-05-16 18:39:13.543712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.261 [2024-05-16 18:39:13.543750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.261 [2024-05-16 18:39:13.543764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.261 [2024-05-16 18:39:13.564891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.261 [2024-05-16 18:39:13.564972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.261 [2024-05-16 18:39:13.565003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.261 [2024-05-16 18:39:13.585948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.261 [2024-05-16 18:39:13.586027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.261 [2024-05-16 18:39:13.586055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.261 [2024-05-16 18:39:13.607036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.261 [2024-05-16 18:39:13.607091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.262 [2024-05-16 18:39:13.607121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.262 [2024-05-16 18:39:13.624876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.262 [2024-05-16 18:39:13.624958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.262 [2024-05-16 18:39:13.624988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.262 [2024-05-16 18:39:13.643131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.262 [2024-05-16 18:39:13.643164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16578 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:00.262 [2024-05-16 18:39:13.643219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.262 [2024-05-16 18:39:13.661156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.262 [2024-05-16 18:39:13.661205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.262 [2024-05-16 18:39:13.661232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.262 [2024-05-16 18:39:13.679046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.262 [2024-05-16 18:39:13.679077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.262 [2024-05-16 18:39:13.679104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.262 [2024-05-16 18:39:13.696687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.262 [2024-05-16 18:39:13.696732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.262 [2024-05-16 18:39:13.696746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.262 [2024-05-16 18:39:13.714024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.262 [2024-05-16 18:39:13.714059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.262 [2024-05-16 18:39:13.714072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.262 [2024-05-16 18:39:13.731858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.262 [2024-05-16 18:39:13.731917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.262 [2024-05-16 18:39:13.731930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.262 [2024-05-16 18:39:13.749550] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.262 [2024-05-16 18:39:13.749583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.262 [2024-05-16 18:39:13.749604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.520 [2024-05-16 18:39:13.767289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.520 [2024-05-16 18:39:13.767323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:9625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.520 [2024-05-16 18:39:13.767336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.520 [2024-05-16 18:39:13.784844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.520 [2024-05-16 18:39:13.784887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.520 [2024-05-16 18:39:13.784900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.520 [2024-05-16 18:39:13.802692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.520 [2024-05-16 18:39:13.802731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.520 [2024-05-16 18:39:13.802744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.520 [2024-05-16 18:39:13.819809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.520 [2024-05-16 18:39:13.819850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.520 [2024-05-16 18:39:13.819864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.520 [2024-05-16 18:39:13.836898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.520 [2024-05-16 18:39:13.836932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.520 [2024-05-16 18:39:13.836945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.520 [2024-05-16 18:39:13.854309] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.520 [2024-05-16 18:39:13.854358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.520 [2024-05-16 18:39:13.854371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.520 [2024-05-16 18:39:13.871663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.520 [2024-05-16 18:39:13.871697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.520 [2024-05-16 18:39:13.871709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.520 [2024-05-16 18:39:13.889062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.520 [2024-05-16 18:39:13.889092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.520 [2024-05-16 18:39:13.889105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.520 [2024-05-16 18:39:13.907044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.520 [2024-05-16 18:39:13.907090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.521 [2024-05-16 18:39:13.907102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.521 [2024-05-16 18:39:13.925047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.521 [2024-05-16 18:39:13.925093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.521 [2024-05-16 18:39:13.925104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.521 [2024-05-16 18:39:13.942464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.521 [2024-05-16 18:39:13.942497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.521 [2024-05-16 18:39:13.942521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.521 [2024-05-16 18:39:13.960775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.521 [2024-05-16 18:39:13.960809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.521 [2024-05-16 18:39:13.960832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.521 [2024-05-16 18:39:13.978355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.521 [2024-05-16 18:39:13.978409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.521 [2024-05-16 18:39:13.978422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.521 [2024-05-16 18:39:13.996092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x112b1e0) 00:18:00.521 [2024-05-16 18:39:13.996136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.521 [2024-05-16 18:39:13.996159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.521 [2024-05-16 18:39:14.013416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x112b1e0) 00:18:00.521 [2024-05-16 18:39:14.013464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:00.521 [2024-05-16 18:39:14.013476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.521 00:18:00.521 Latency(us) 00:18:00.521 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.521 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:00.521 nvme0n1 : 2.01 13967.91 54.56 0.00 0.00 9155.20 3813.00 34793.66 00:18:00.521 =================================================================================================================== 00:18:00.521 Total : 13967.91 54.56 0.00 0.00 9155.20 3813.00 34793.66 00:18:00.521 0 00:18:00.779 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:00.779 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:00.779 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:00.779 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:00.779 | .driver_specific 00:18:00.779 | .nvme_error 00:18:00.779 | .status_code 00:18:00.779 | .command_transient_transport_error' 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 )) 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80531 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 80531 ']' 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 80531 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80531 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80531' 00:18:01.038 killing process with pid 80531 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 80531 00:18:01.038 Received shutdown signal, test time was about 2.000000 seconds 00:18:01.038 00:18:01.038 Latency(us) 00:18:01.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.038 =================================================================================================================== 00:18:01.038 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.038 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 80531 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:01.297 18:39:14 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80587 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80587 /var/tmp/bperf.sock 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 80587 ']' 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:01.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:01.297 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:01.298 18:39:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:01.298 [2024-05-16 18:39:14.769601] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:18:01.298 [2024-05-16 18:39:14.769749] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80587 ] 00:18:01.298 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:01.298 Zero copy mechanism will not be used. 
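The get_transient_errcount check traced just before this second bdevperf instance was started is a single RPC round trip: bdev_get_iostat is issued against the application listening on /var/tmp/bperf.sock and the per-bdev NVMe error counters are filtered down to the transient-transport-error count with jq. A minimal stand-alone sketch of that check, using the same paths and jq filter as the trace (the count variable and the final echo are illustrative, not part of the suite):

  # Ask the bdevperf app for nvme0n1 I/O statistics and pull out how many
  # COMMAND TRANSIENT TRANSPORT ERROR completions it has recorded so far.
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The test only asserts that the count is non-zero; the run above saw 110.
  (( count > 0 )) && echo "transient transport errors: $count"

That same counter is what the (( 110 > 0 )) assertion above was evaluated against before the previous bdevperf process (pid 80531) was killed.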
00:18:01.556 [2024-05-16 18:39:14.907810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.814 [2024-05-16 18:39:15.058741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.814 [2024-05-16 18:39:15.134175] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:02.381 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:02.381 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:18:02.381 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:02.381 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:02.640 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:02.640 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.640 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.640 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.640 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.640 18:39:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.899 nvme0n1 00:18:02.899 18:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:02.899 18:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.899 18:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:02.899 18:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.899 18:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:02.899 18:39:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:03.160 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:03.160 Zero copy mechanism will not be used. 00:18:03.160 Running I/O for 2 seconds... 
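Ahead of the 2-second run whose output follows, the trace above shows the whole sub-test being wired up over JSON-RPC: NVMe error statistics and the -1 bdev retry count are set in the bdevperf app, CRC32C error injection is first disabled via rpc_cmd, a controller is attached with TCP data digests enabled (--ddgst), injection is then switched to corrupt mode, and perform_tests is finally driven through bdevperf.py. A condensed sketch of that sequence, assuming rpc_cmd talks to its default RPC socket while the bdev_nvme calls go to /var/tmp/bperf.sock (the RPC variable is illustrative; the commands and flags themselves are taken from the trace):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # bdevperf side: record NVMe error status codes; retry count of -1 as captured in the trace
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # reset any previous CRC32C error injection (default rpc_cmd socket)
  $RPC accel_error_inject_error -o crc32c -t disable
  # attach the NVMe/TCP controller with data digest enabled so payloads are CRC-checked
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # now corrupt CRC32C results so the data digest checks start failing
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  # kick off the queued randread workload in the already-running bdevperf process
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest shows up below as a data digest error in nvme_tcp.c and is reported as a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; that is the counter get_transient_errcount reads back at the end of the run.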
00:18:03.160 [2024-05-16 18:39:16.446624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.446679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.446695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.450941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.450993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.451006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.455477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.455513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.455526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.459822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.459866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.459879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.464425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.464473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.464487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.468877] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.468922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.468936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.473292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.473339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.473351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.477833] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.477879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.477892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.482218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.482266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.482293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.486649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.486683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.486697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.490995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.491042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.491055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.495508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.495556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.495568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.500005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.500052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.500065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.504558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.504609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.504622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.509014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.509047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.509061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.513330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.513391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.513409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.517753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.517787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.517800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.522175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.522224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.522235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.526726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.526761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.526774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.531054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.531101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.531113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.535428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.535462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.160 [2024-05-16 18:39:16.535475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.539998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.540045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.540057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.544612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.544646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.544659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.549189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.549237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.549250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.553448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.160 [2024-05-16 18:39:16.553494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.160 [2024-05-16 18:39:16.553506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.160 [2024-05-16 18:39:16.557806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.557862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.557874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.562115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.562160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.562172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.566343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.566389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.566401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.570543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.570589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.570617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.574984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.575030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.575042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.579079] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.579141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.579154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.583377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.583410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.583423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.588003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.588047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.588060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.592504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.592550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.592561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.597198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.597249] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.597262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.601546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.601594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.601624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.606036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.606081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.606093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.610365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.610411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.610423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.614606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.614640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.614653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.618969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.619015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.619027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.623322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.623355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.623368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.627871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 
00:18:03.161 [2024-05-16 18:39:16.627948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.627975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.632552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.632586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.632599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.636888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.636952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.636980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.641414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.641445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.641456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.645871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.645943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.645971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.650555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.650618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.650631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.161 [2024-05-16 18:39:16.654972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.161 [2024-05-16 18:39:16.655032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.161 [2024-05-16 18:39:16.655044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.659526] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.659576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.659588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.663899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.663961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.663974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.668483] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.668517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.668530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.673093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.673123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.673135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.677455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.677484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.677496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.681937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.682012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.682040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.686448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.686480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.686493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:03.424 [2024-05-16 18:39:16.690825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.690883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.690896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.695162] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.695220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.695233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.699441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.699474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.699487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.704242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.424 [2024-05-16 18:39:16.704291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.424 [2024-05-16 18:39:16.704304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.424 [2024-05-16 18:39:16.708631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.708678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.708689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.712930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.712977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.712989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.717092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.717138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.717149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.721225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.721272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.721284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.725742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.725791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.725820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.730423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.730454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.730466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.734871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.734928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.734955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.739255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.739289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.739302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.743842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.743903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.743932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.748408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.748440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.748452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.752896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.753002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.753014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.757419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.757465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.757477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.761832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.761874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.761888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.766209] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.766255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.766267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.770502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.770548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.770559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.775043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.775089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.775101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.779377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.779410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.425 [2024-05-16 18:39:16.779423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.783739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.783773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.783785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.788279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.788327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.788348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.792617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.792665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.792678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.797266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.797300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.797313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.801724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.801757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.801770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.806292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.806323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.806335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.810644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.810678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.810691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.815099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.815144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.815173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.819412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.819446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.819459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.823927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.824021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.824033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.828512] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.425 [2024-05-16 18:39:16.828544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.425 [2024-05-16 18:39:16.828555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.425 [2024-05-16 18:39:16.832956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.833014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.833026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.837303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.837350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.837377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.841857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.841899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.841912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.846347] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.846392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.846404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.850790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.850860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.850873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.855098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.855142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.855153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.859282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.859314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.859327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.863410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.863458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.863486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.867821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.867880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.867896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.872134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 
00:18:03.426 [2024-05-16 18:39:16.872178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.872207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.876634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.876695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.876707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.881461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.881507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.881518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.885790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.885835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.885848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.890170] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.890217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.890244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.894471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.894519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.894530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.898991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.899036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.899048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.903458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.903491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.903504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.907939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.908001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.908013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.912486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.912533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.912546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.917103] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.917148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.917159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.426 [2024-05-16 18:39:16.921570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.426 [2024-05-16 18:39:16.921631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.426 [2024-05-16 18:39:16.921644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.925855] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.925898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.925911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.930443] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.930467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.930479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.934998] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.935047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.935060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.939535] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.939568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.939592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.943967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.944014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.944027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.948538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.948585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.948615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.952890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.952921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.952934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.957411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.957458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.957471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.961819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.961862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.961875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:03.688 [2024-05-16 18:39:16.966307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.966354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.966366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.970716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.970750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.970763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.975052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.975086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.975099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.979488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.979523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.979536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.983906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.983975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.983987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.988466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.988500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.988514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.993175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.993221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.993234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:16.997665] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:16.997713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:16.997726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.002106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.002151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.002162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.006397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.006443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.006455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.010746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.010795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.010808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.015230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.015267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.015280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.019500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.019563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.019575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.023977] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.024021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.024032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.028333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.028379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.028392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.032824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.032881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.032894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.037359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.037405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.037416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.041718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.688 [2024-05-16 18:39:17.041752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.688 [2024-05-16 18:39:17.041765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.688 [2024-05-16 18:39:17.046183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.046228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.046239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.050455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.050500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.050511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.054922] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.054973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.689 [2024-05-16 18:39:17.054995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.059349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.059382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.059394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.063702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.063735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.063749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.068006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.068039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.068052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.072340] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.072373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.072386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.076573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.076607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.076620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.081024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.081065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.081079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.085354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.085407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.085420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.090024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.090070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.090099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.094621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.094655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.094667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.099023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.099071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.099099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.103514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.103548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.103561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.107722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.107755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.107768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.112054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.112100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.112112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.116420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.116453] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.116466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.120828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.120870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.120884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.125362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.125396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.125409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.130036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.130083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.130096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.134267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.134300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.134313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.138516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.138550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.138562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.142844] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.142887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.142899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.147076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 
00:18:03.689 [2024-05-16 18:39:17.147108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.147121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.689 [2024-05-16 18:39:17.151490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.689 [2024-05-16 18:39:17.151523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.689 [2024-05-16 18:39:17.151536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.690 [2024-05-16 18:39:17.155982] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.690 [2024-05-16 18:39:17.156030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.690 [2024-05-16 18:39:17.156041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.690 [2024-05-16 18:39:17.160338] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.690 [2024-05-16 18:39:17.160371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.690 [2024-05-16 18:39:17.160384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.690 [2024-05-16 18:39:17.164763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.690 [2024-05-16 18:39:17.164812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.690 [2024-05-16 18:39:17.164825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.690 [2024-05-16 18:39:17.169541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.690 [2024-05-16 18:39:17.169590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.690 [2024-05-16 18:39:17.169603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.690 [2024-05-16 18:39:17.173891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.690 [2024-05-16 18:39:17.173961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.690 [2024-05-16 18:39:17.173973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.690 [2024-05-16 18:39:17.178069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.690 [2024-05-16 18:39:17.178115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.690 [2024-05-16 18:39:17.178126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.690 [2024-05-16 18:39:17.182085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.690 [2024-05-16 18:39:17.182131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.690 [2024-05-16 18:39:17.182142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.690 [2024-05-16 18:39:17.186405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.690 [2024-05-16 18:39:17.186452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.690 [2024-05-16 18:39:17.186479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.190821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.190863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.190876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.195312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.195344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.195357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.199736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.199770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.199784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.204152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.204198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.204210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.208617] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.208650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.208662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.213032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.213077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.213090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.217412] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.217459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.217471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.221918] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.221970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.221998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.226265] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.226310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.226322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.230647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.230681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.230693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.235124] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.235212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.235226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:03.950 [2024-05-16 18:39:17.239781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.239814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.239842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.950 [2024-05-16 18:39:17.244174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.950 [2024-05-16 18:39:17.244220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.950 [2024-05-16 18:39:17.244232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.248633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.248672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.248684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.253111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.253158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.253170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.257480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.257529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.257541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.261887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.261946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.261993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.266397] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.266430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.266443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.270962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.271034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.271047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.275555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.275589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.275602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.280021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.280066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.280077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.284469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.284517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.284529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.289048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.289094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.289105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.293416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.293463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.293475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.297744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.297793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.297806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.302097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.302143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.302154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.306357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.306421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.306434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.310545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.310592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.310621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.314749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.314798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.314811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.319351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.319383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.319396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.323981] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.324041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.324069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.328496] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.328543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:03.951 [2024-05-16 18:39:17.328556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.333014] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.333058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.333070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.337419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.337465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.337477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.341773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.341807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.341830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.346219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.346264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.346276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.350720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.350754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.350766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.355185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.355216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.355229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.359480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.359513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.359526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.363775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.363810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.951 [2024-05-16 18:39:17.363836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.951 [2024-05-16 18:39:17.367985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.951 [2024-05-16 18:39:17.368017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.368031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.372244] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.372279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.372292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.376469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.376505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.376518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.380842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.380877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.380890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.385043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.385076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.385088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.389239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.389273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.389286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.393594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.393632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.393645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.397984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.398020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.398033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.402559] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.402607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.402626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.406915] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.406972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.406986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.411409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.411447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.411460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.415850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.415885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.415899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.420176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 
00:18:03.952 [2024-05-16 18:39:17.420212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.420228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.424563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.424600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.424620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.428949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.428985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.428998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.433349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.433387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.433401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.437838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.437879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.437892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.442110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.442146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.442159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:03.952 [2024-05-16 18:39:17.446356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:03.952 [2024-05-16 18:39:17.446391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:03.952 [2024-05-16 18:39:17.446404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.212 [2024-05-16 18:39:17.450737] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.212 [2024-05-16 18:39:17.450773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.212 [2024-05-16 18:39:17.450787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.212 [2024-05-16 18:39:17.455220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.455261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.455275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.459597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.459635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.459649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.463907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.463941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.463955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.468125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.468160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.468173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.472489] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.472524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.472537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.476741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.476776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.476796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.481220] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.481255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.481268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.485684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.485720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.485734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.490236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.490281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.490294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.494632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.494670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.494683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.499078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.499115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.499129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.503485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.503522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.503536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.507983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.508020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.508034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:04.213 [2024-05-16 18:39:17.512362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.512402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.512415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.516718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.516755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.516769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.521143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.521182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.521196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.525615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.525664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.525677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.529979] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.530014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.530027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.534505] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.534543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.534557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.538934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.538970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.538988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.543420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.543456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.543469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.548021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.548056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.548069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.552324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.552358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.552372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.556554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.556589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.556602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.560857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.560890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.560903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.565132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.565168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.565182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.569794] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.569858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.569878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.213 [2024-05-16 18:39:17.574390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.213 [2024-05-16 18:39:17.574426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.213 [2024-05-16 18:39:17.574439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.578631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.578666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.578680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.582975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.583010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.583023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.587392] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.587428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.587441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.591843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.591877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.591891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.596252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.596287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.596313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.600811] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.600858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:04.214 [2024-05-16 18:39:17.600871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.605128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.605162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.605176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.609507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.609544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.609557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.614006] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.614041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.614055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.618173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.618209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.618222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.622613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.622649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.622662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.626925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.626958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.626971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.631328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.631363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.631376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.635691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.635726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.635739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.639940] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.639973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.639985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.644606] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.644643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.644656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.648856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.648890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.648903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.653176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.653210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.653223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.657509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.657545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.657558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.662240] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.662277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.662290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.666840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.666888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.666902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.671119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.671152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.671166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.675482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.675519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.675531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.679693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.679727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.679741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.683837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.683869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.683882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.688136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.214 [2024-05-16 18:39:17.688171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.688184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.692452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 
00:18:04.214 [2024-05-16 18:39:17.692497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.214 [2024-05-16 18:39:17.692517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.214 [2024-05-16 18:39:17.696973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.215 [2024-05-16 18:39:17.697007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.215 [2024-05-16 18:39:17.697021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.215 [2024-05-16 18:39:17.701282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.215 [2024-05-16 18:39:17.701317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.215 [2024-05-16 18:39:17.701330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.215 [2024-05-16 18:39:17.705524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.215 [2024-05-16 18:39:17.705558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.215 [2024-05-16 18:39:17.705571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.215 [2024-05-16 18:39:17.710012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.215 [2024-05-16 18:39:17.710045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.215 [2024-05-16 18:39:17.710058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.714508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.714542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.714555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.718753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.718787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.718800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.723043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.723076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.723089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.727226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.727270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.727283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.731446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.731482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.731497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.735893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.735929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.735942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.740140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.740173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.740186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.744408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.744442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.744455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.748746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.748782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.748795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.753161] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.753196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.753210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.757482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.757518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.757531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.761997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.762032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.762046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.766289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.766353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.766365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.770812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.770872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.770893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.775410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.775443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.775455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.780036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.780086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.780099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:04.476 [2024-05-16 18:39:17.784288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.784339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.784352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.788740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.788805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.788818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.793243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.793277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.793291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.797503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.797539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.797552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.801761] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.801795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.801808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.805954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.805987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.806000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.810289] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.810324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.810337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.814694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.814730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.814743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.819072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.476 [2024-05-16 18:39:17.819106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.476 [2024-05-16 18:39:17.819119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.476 [2024-05-16 18:39:17.823320] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.823353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.823367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.827780] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.827839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.827858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.832127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.832163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.832176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.836367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.836401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.836415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.840905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.840940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.840953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.845172] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.845219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.845232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.849480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.849526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.849538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.853984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.854017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.854030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.858152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.858188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.858201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.862356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.862391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.862404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.866636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.866671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.866684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.870936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.870971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:04.477 [2024-05-16 18:39:17.870984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.875197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.875231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.875244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.879581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.879616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.879629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.883858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.883892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.883905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.888379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.888415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.888428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.892616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.892651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.892664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.896885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.896919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.896932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.901246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.901281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.901295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.905476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.905512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.905525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.909838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.909873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.909886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.914292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.914339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.914352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.918906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.918969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.918982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.923390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.923423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.923436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.928056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.928105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.928118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.932396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.932442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.932454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.936779] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.936813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.936839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.477 [2024-05-16 18:39:17.941184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.477 [2024-05-16 18:39:17.941232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.477 [2024-05-16 18:39:17.941245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.478 [2024-05-16 18:39:17.945574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.478 [2024-05-16 18:39:17.945625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.478 [2024-05-16 18:39:17.945639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.478 [2024-05-16 18:39:17.950126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.478 [2024-05-16 18:39:17.950159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.478 [2024-05-16 18:39:17.950172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.478 [2024-05-16 18:39:17.954699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.478 [2024-05-16 18:39:17.954734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.478 [2024-05-16 18:39:17.954747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.478 [2024-05-16 18:39:17.959074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.478 [2024-05-16 18:39:17.959121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.478 [2024-05-16 18:39:17.959135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.478 [2024-05-16 18:39:17.963433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 
00:18:04.478 [2024-05-16 18:39:17.963466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.478 [2024-05-16 18:39:17.963478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.478 [2024-05-16 18:39:17.967768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.478 [2024-05-16 18:39:17.967802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.478 [2024-05-16 18:39:17.967815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.478 [2024-05-16 18:39:17.972224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.478 [2024-05-16 18:39:17.972272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.478 [2024-05-16 18:39:17.972284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.738 [2024-05-16 18:39:17.976468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.738 [2024-05-16 18:39:17.976502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.738 [2024-05-16 18:39:17.976515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.738 [2024-05-16 18:39:17.980705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.738 [2024-05-16 18:39:17.980749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.738 [2024-05-16 18:39:17.980762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.738 [2024-05-16 18:39:17.985335] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.738 [2024-05-16 18:39:17.985369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.738 [2024-05-16 18:39:17.985382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.738 [2024-05-16 18:39:17.989740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.738 [2024-05-16 18:39:17.989775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.738 [2024-05-16 18:39:17.989788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.738 [2024-05-16 18:39:17.994326] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.738 [2024-05-16 18:39:17.994357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.738 [2024-05-16 18:39:17.994370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.738 [2024-05-16 18:39:17.999047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.738 [2024-05-16 18:39:17.999089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.738 [2024-05-16 18:39:17.999102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.738 [2024-05-16 18:39:18.003764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.738 [2024-05-16 18:39:18.003798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.738 [2024-05-16 18:39:18.003811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.738 [2024-05-16 18:39:18.008373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.738 [2024-05-16 18:39:18.008407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.738 [2024-05-16 18:39:18.008420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.738 [2024-05-16 18:39:18.012867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.738 [2024-05-16 18:39:18.012942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.012970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.017337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.017385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.017397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.021591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.021654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.021666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.025623] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.025669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.025681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.029778] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.029825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.029864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.034018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.034074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.034087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.038267] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.038312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.038324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.042650] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.042698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.042711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.047067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.047112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.047123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.051307] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.051341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.051354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:18:04.739 [2024-05-16 18:39:18.055966] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.056011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.056022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.060373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.060413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.060427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.064730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.064764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.064776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.069250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.069296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.069307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.073526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.073572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.073583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.077883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.077945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.077972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.082434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.082480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.082491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.086759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.086793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.086806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.091128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.091183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.091214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.095327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.095374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.095386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.099789] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.099846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.099860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.104131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.104175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.104187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.108404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.108450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.108462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.112945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.113021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.113032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.117380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.117425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.117437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.121745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.121778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.121791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.126183] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.126243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.126255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.130453] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.739 [2024-05-16 18:39:18.130498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.739 [2024-05-16 18:39:18.130509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.739 [2024-05-16 18:39:18.135036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.135081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.135092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.139322] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.139356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.139368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.143742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.143775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:04.740 [2024-05-16 18:39:18.143787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.148475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.148522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.148533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.153268] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.153313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.153325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.157662] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.157711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.157723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.162058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.162102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.162114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.166425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.166471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.166482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.170719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.170766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.170778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.175125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.175171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.175209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.179263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.179309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.179321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.183306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.183338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.183350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.187417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.187449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.187462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.191863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.191920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.191947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.196121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.196165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.196177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.200341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.200387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.200414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.204718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.204765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.204776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.209371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.209415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.209426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.213972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.214031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.214043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.218206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.218251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.218262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.222442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.222488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.222499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.226691] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.226738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.226750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.231089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:04.740 [2024-05-16 18:39:18.231134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.231145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.740 [2024-05-16 18:39:18.235663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 
00:18:04.740 [2024-05-16 18:39:18.235697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.740 [2024-05-16 18:39:18.235710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.240380] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.240427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.240438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.244868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.244911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.244925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.249362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.249408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.249420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.253936] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.253967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.253980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.258169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.258201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.258214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.262414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.262462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.262475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.266729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.266762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.266774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.271090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.271134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.271146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.275493] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.275565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.275594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.280184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.280228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.280239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.284556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.284618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.284631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.289189] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.289233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.289244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.293561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.293610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.293627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.298147] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.298190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.298202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.302477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.302521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.302533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.306769] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.306816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.306827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.311215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.311247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.311260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.315762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.315807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.315818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.319941] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.001 [2024-05-16 18:39:18.319985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.001 [2024-05-16 18:39:18.319997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.001 [2024-05-16 18:39:18.324228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.324289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.324301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:18:05.002 [2024-05-16 18:39:18.328570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.328603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.328616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.332999] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.333042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.333054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.337247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.337309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.337321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.341716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.341763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.341776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.346210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.346255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.346284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.350526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.350571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.350599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.355024] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.355052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.355063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.359391] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.359424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.359438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.364105] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.364148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.364159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.368441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.368487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.368498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.373617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.373654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.373668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.378084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.378128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.378140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.382279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.382325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.382337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.386874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.386917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.386930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.391339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.391372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.391385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.395770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.395804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.395816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.400295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.400340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.400352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.404709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.404757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.404770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.409185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.409231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.409243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.413630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.413664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.413676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.418155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.418185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:05.002 [2024-05-16 18:39:18.418196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.422563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.422627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.422640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.427096] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.427152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.427163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.002 [2024-05-16 18:39:18.431634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x78e650) 00:18:05.002 [2024-05-16 18:39:18.431668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.002 [2024-05-16 18:39:18.431680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:18:05.002
00:18:05.002 Latency(us)
00:18:05.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:05.002 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:18:05.002 nvme0n1 : 2.00 6996.46 874.56 0.00 0.00 2283.01 1772.45 11081.54
00:18:05.002 ===================================================================================================================
00:18:05.002 Total : 6996.46 874.56 0.00 0.00 2283.01 1772.45 11081.54
00:18:05.002 0
00:18:05.002 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:05.002 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:05.002 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:05.002 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:05.002 | .driver_specific
00:18:05.002 | .nvme_error
00:18:05.002 | .status_code
00:18:05.003 | .command_transient_transport_error'
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 451 > 0 ))
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80587
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 80587 ']'
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 80587
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80587
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:18:05.261 killing process with pid 80587
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80587'
00:18:05.261 Received shutdown signal, test time was about 2.000000 seconds
00:18:05.261
00:18:05.261 Latency(us)
00:18:05.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:05.261 ===================================================================================================================
00:18:05.261 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 80587
00:18:05.261 18:39:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 80587
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80653
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80653 /var/tmp/bperf.sock
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 80653 ']'
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:05.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:05.828 18:39:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:05.828 [2024-05-16 18:39:19.100893] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:18:05.828 [2024-05-16 18:39:19.101022] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80653 ] 00:18:05.828 [2024-05-16 18:39:19.241437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.101 [2024-05-16 18:39:19.379963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.101 [2024-05-16 18:39:19.441480] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:06.676 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:06.676 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:18:06.676 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:06.676 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:06.935 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:06.935 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.935 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:06.935 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.935 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:06.935 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.500 nvme0n1 00:18:07.500 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:07.500 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.500 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.500 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.500 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:07.500 18:39:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:07.500 Running I/O for 2 seconds... 
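For reference, the xtrace above amounts to the following standalone sequence. This is a minimal sketch assembled only from the commands visible in the trace (binary paths, the /var/tmp/bperf.sock socket, the 10.0.0.2:4420 target address and the NQN are taken from the log); the socket that rpc_cmd uses for accel_error_inject_error is not shown in the log and is assumed here to be the target's default. It is not the verbatim host/digest.sh script.

  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # 1. Start bdevperf in wait-for-tests mode (-z) on its own RPC socket, randwrite 4096B qd=128 for 2s.
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 4096 -t 2 -q 128 -z &
  # (the harness waits for $BPERF_SOCK to appear before issuing RPCs)

  # 2. Enable per-status-code NVMe error counters and unlimited bdev-level retries, as in the trace.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # 3. Attach the controller with data digest enabled (--ddgst) over TCP.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 4. Inject crc32c corruption in the accel layer (-t corrupt -i 256, as in the trace) so data
  #    digests mismatch; issued without -s, i.e. against the default RPC socket (assumption).
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # 5. Run the workload, then read back the transient transport error count verified by digest.sh.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The digest-error stream that follows is the expected result of step 4: each corrupted crc32c shows up as a data digest error on the TCP qpair and is surfaced to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the count read in step 5 asserts on.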
00:18:07.500 [2024-05-16 18:39:20.920793] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fef90 00:18:07.500 [2024-05-16 18:39:20.923514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.500 [2024-05-16 18:39:20.923613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:07.500 [2024-05-16 18:39:20.937612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190feb58 00:18:07.500 [2024-05-16 18:39:20.940312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.500 [2024-05-16 18:39:20.940358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:07.500 [2024-05-16 18:39:20.954755] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fe2e8 00:18:07.500 [2024-05-16 18:39:20.957437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.500 [2024-05-16 18:39:20.957481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:07.500 [2024-05-16 18:39:20.971424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fda78 00:18:07.500 [2024-05-16 18:39:20.974028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.500 [2024-05-16 18:39:20.974072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:07.500 [2024-05-16 18:39:20.987530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fd208 00:18:07.500 [2024-05-16 18:39:20.990269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.500 [2024-05-16 18:39:20.990319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.005338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fc998 00:18:07.759 [2024-05-16 18:39:21.007990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.008037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.022512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fc128 00:18:07.759 [2024-05-16 18:39:21.025117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.025166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.039253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fb8b8 00:18:07.759 [2024-05-16 18:39:21.041632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.041668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.056011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fb048 00:18:07.759 [2024-05-16 18:39:21.058466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.058516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.072816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fa7d8 00:18:07.759 [2024-05-16 18:39:21.075405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.075438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.089952] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f9f68 00:18:07.759 [2024-05-16 18:39:21.092285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.092318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.106437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f96f8 00:18:07.759 [2024-05-16 18:39:21.108799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.108842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.123031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f8e88 00:18:07.759 [2024-05-16 18:39:21.125393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.125426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.139695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f8618 00:18:07.759 [2024-05-16 18:39:21.141980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.142011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.156411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f7da8 00:18:07.759 [2024-05-16 18:39:21.158786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.759 [2024-05-16 18:39:21.158816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:07.759 [2024-05-16 18:39:21.172762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f7538 00:18:07.760 [2024-05-16 18:39:21.175032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.760 [2024-05-16 18:39:21.175079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:07.760 [2024-05-16 18:39:21.189067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f6cc8 00:18:07.760 [2024-05-16 18:39:21.191364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.760 [2024-05-16 18:39:21.191396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:07.760 [2024-05-16 18:39:21.205819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f6458 00:18:07.760 [2024-05-16 18:39:21.208272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.760 [2024-05-16 18:39:21.208300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:07.760 [2024-05-16 18:39:21.222801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f5be8 00:18:07.760 [2024-05-16 18:39:21.225083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.760 [2024-05-16 18:39:21.225115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:07.760 [2024-05-16 18:39:21.239782] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f5378 00:18:07.760 [2024-05-16 18:39:21.242085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.760 [2024-05-16 18:39:21.242135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:07.760 [2024-05-16 18:39:21.256957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f4b08 00:18:07.760 [2024-05-16 18:39:21.259149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:07.760 [2024-05-16 18:39:21.259213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.273495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f4298 00:18:08.018 [2024-05-16 18:39:21.275739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.275765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.289920] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f3a28 00:18:08.018 [2024-05-16 18:39:21.292094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.292141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.306792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f31b8 00:18:08.018 [2024-05-16 18:39:21.309019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.309066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.323701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f2948 00:18:08.018 [2024-05-16 18:39:21.325814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.325852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.340456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f20d8 00:18:08.018 [2024-05-16 18:39:21.342606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.342637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.357112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f1868 00:18:08.018 [2024-05-16 18:39:21.359372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.359404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.374082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f0ff8 00:18:08.018 [2024-05-16 18:39:21.376203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.376248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.391293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f0788 00:18:08.018 [2024-05-16 18:39:21.393431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.393460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.408027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190eff18 00:18:08.018 [2024-05-16 18:39:21.410060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.410091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.424632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ef6a8 00:18:08.018 [2024-05-16 18:39:21.426698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.426728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.440826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190eee38 00:18:08.018 [2024-05-16 18:39:21.442786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.442832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.457185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ee5c8 00:18:08.018 [2024-05-16 18:39:21.459262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.459293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.474378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190edd58 00:18:08.018 [2024-05-16 18:39:21.476341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.476385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.490792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ed4e8 00:18:08.018 [2024-05-16 18:39:21.492680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.492711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:08.018 [2024-05-16 18:39:21.507115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ecc78 00:18:08.018 [2024-05-16 18:39:21.509035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.018 [2024-05-16 18:39:21.509087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:08.277 [2024-05-16 18:39:21.524251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ec408 00:18:08.277 [2024-05-16 18:39:21.526127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.277 [2024-05-16 18:39:21.526176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:08.277 [2024-05-16 18:39:21.540926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ebb98 00:18:08.277 [2024-05-16 18:39:21.543006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.277 [2024-05-16 18:39:21.543057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:08.277 [2024-05-16 18:39:21.557402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190eb328 00:18:08.277 [2024-05-16 18:39:21.559361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.277 [2024-05-16 18:39:21.559392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:08.277 [2024-05-16 18:39:21.573542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190eaab8 00:18:08.277 [2024-05-16 18:39:21.575549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.277 [2024-05-16 18:39:21.575609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:08.277 [2024-05-16 18:39:21.590178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ea248 00:18:08.277 [2024-05-16 18:39:21.592098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.277 [2024-05-16 18:39:21.592142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:08.277 [2024-05-16 18:39:21.607093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e99d8 00:18:08.277 [2024-05-16 18:39:21.608875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.277 [2024-05-16 
18:39:21.608959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:08.277 [2024-05-16 18:39:21.623610] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e9168 00:18:08.277 [2024-05-16 18:39:21.625450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.277 [2024-05-16 18:39:21.625493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:08.277 [2024-05-16 18:39:21.640125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e88f8 00:18:08.277 [2024-05-16 18:39:21.641799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.277 [2024-05-16 18:39:21.641839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:08.277 [2024-05-16 18:39:21.656260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e8088 00:18:08.277 [2024-05-16 18:39:21.657930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.277 [2024-05-16 18:39:21.657995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:08.278 [2024-05-16 18:39:21.672389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e7818 00:18:08.278 [2024-05-16 18:39:21.674092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.278 [2024-05-16 18:39:21.674135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:08.278 [2024-05-16 18:39:21.688555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e6fa8 00:18:08.278 [2024-05-16 18:39:21.690275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.278 [2024-05-16 18:39:21.690318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:08.278 [2024-05-16 18:39:21.705168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e6738 00:18:08.278 [2024-05-16 18:39:21.706849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.278 [2024-05-16 18:39:21.706887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:08.278 [2024-05-16 18:39:21.721737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e5ec8 00:18:08.278 [2024-05-16 18:39:21.723339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:08.278 [2024-05-16 18:39:21.723370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.278 [2024-05-16 18:39:21.738330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e5658 00:18:08.278 [2024-05-16 18:39:21.739904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.278 [2024-05-16 18:39:21.739938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:08.278 [2024-05-16 18:39:21.754765] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e4de8 00:18:08.278 [2024-05-16 18:39:21.756391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.278 [2024-05-16 18:39:21.756423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:08.278 [2024-05-16 18:39:21.772085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e4578 00:18:08.278 [2024-05-16 18:39:21.773762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.278 [2024-05-16 18:39:21.773793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.789649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e3d08 00:18:08.537 [2024-05-16 18:39:21.791316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.791350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.807021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e3498 00:18:08.537 [2024-05-16 18:39:21.808548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.808589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.823960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e2c28 00:18:08.537 [2024-05-16 18:39:21.825559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.825625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.841223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e23b8 00:18:08.537 [2024-05-16 18:39:21.842699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8854 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.842732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.857955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e1b48 00:18:08.537 [2024-05-16 18:39:21.859430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.859463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.874226] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e12d8 00:18:08.537 [2024-05-16 18:39:21.875684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.875716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.890690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e0a68 00:18:08.537 [2024-05-16 18:39:21.892101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.892132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.907247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e01f8 00:18:08.537 [2024-05-16 18:39:21.908750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.908783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.923887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190df988 00:18:08.537 [2024-05-16 18:39:21.925295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.925334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.940818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190df118 00:18:08.537 [2024-05-16 18:39:21.942198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.942246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.957275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190de8a8 00:18:08.537 [2024-05-16 18:39:21.958621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:22039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.958652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.973772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190de038 00:18:08.537 [2024-05-16 18:39:21.975119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:21.975164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:21.997356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190de038 00:18:08.537 [2024-05-16 18:39:22.000109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:22.000155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:22.013291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190de8a8 00:18:08.537 [2024-05-16 18:39:22.015920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:22.015991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:08.537 [2024-05-16 18:39:22.029236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190df118 00:18:08.537 [2024-05-16 18:39:22.031972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.537 [2024-05-16 18:39:22.032016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.046335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190df988 00:18:08.797 [2024-05-16 18:39:22.048985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.049034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.063242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e01f8 00:18:08.797 [2024-05-16 18:39:22.065721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.065754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.079749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e0a68 00:18:08.797 [2024-05-16 18:39:22.082194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.082229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.096806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e12d8 00:18:08.797 [2024-05-16 18:39:22.099364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.099396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.113683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e1b48 00:18:08.797 [2024-05-16 18:39:22.116219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.116264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.130503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e23b8 00:18:08.797 [2024-05-16 18:39:22.133150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.133194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.147329] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e2c28 00:18:08.797 [2024-05-16 18:39:22.149715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.149761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.163651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e3498 00:18:08.797 [2024-05-16 18:39:22.165934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.165976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.179729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e3d08 00:18:08.797 [2024-05-16 18:39:22.182181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.182224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.195802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e4578 00:18:08.797 [2024-05-16 
18:39:22.198035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.198079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.211751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e4de8 00:18:08.797 [2024-05-16 18:39:22.214124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.214168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.228027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e5658 00:18:08.797 [2024-05-16 18:39:22.230327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.797 [2024-05-16 18:39:22.230380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:08.797 [2024-05-16 18:39:22.244917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e5ec8 00:18:08.797 [2024-05-16 18:39:22.247303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.798 [2024-05-16 18:39:22.247334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.798 [2024-05-16 18:39:22.261317] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e6738 00:18:08.798 [2024-05-16 18:39:22.263602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.798 [2024-05-16 18:39:22.263635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:08.798 [2024-05-16 18:39:22.278013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e6fa8 00:18:08.798 [2024-05-16 18:39:22.280293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.798 [2024-05-16 18:39:22.280336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:08.798 [2024-05-16 18:39:22.294412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e7818 00:18:08.798 [2024-05-16 18:39:22.296798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.798 [2024-05-16 18:39:22.296841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.310172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e8088 
00:18:09.058 [2024-05-16 18:39:22.312304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.312351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.325574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e88f8 00:18:09.058 [2024-05-16 18:39:22.327882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.327948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.342749] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e9168 00:18:09.058 [2024-05-16 18:39:22.344932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.345023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.359047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190e99d8 00:18:09.058 [2024-05-16 18:39:22.361199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.361246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.375426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ea248 00:18:09.058 [2024-05-16 18:39:22.377513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:6657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.377556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.391718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190eaab8 00:18:09.058 [2024-05-16 18:39:22.393865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.393903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.408115] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190eb328 00:18:09.058 [2024-05-16 18:39:22.410173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.410206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.424357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) 
with pdu=0x2000190ebb98 00:18:09.058 [2024-05-16 18:39:22.426530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.426585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.440998] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ec408 00:18:09.058 [2024-05-16 18:39:22.443077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.443124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.457524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ecc78 00:18:09.058 [2024-05-16 18:39:22.459587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.459650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.474475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ed4e8 00:18:09.058 [2024-05-16 18:39:22.476507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.476554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.491310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190edd58 00:18:09.058 [2024-05-16 18:39:22.493238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.493286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.507752] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190ee5c8 00:18:09.058 [2024-05-16 18:39:22.509801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.509840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.524138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190eee38 00:18:09.058 [2024-05-16 18:39:22.526051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.526082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.540671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1faf930) with pdu=0x2000190ef6a8 00:18:09.058 [2024-05-16 18:39:22.542559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.058 [2024-05-16 18:39:22.542594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:09.058 [2024-05-16 18:39:22.557303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190eff18 00:18:09.318 [2024-05-16 18:39:22.559237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.318 [2024-05-16 18:39:22.559279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:09.318 [2024-05-16 18:39:22.573912] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f0788 00:18:09.318 [2024-05-16 18:39:22.575840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.318 [2024-05-16 18:39:22.575928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:09.318 [2024-05-16 18:39:22.590726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f0ff8 00:18:09.318 [2024-05-16 18:39:22.592609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.318 [2024-05-16 18:39:22.592659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:09.318 [2024-05-16 18:39:22.607485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f1868 00:18:09.318 [2024-05-16 18:39:22.609324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.318 [2024-05-16 18:39:22.609373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:09.318 [2024-05-16 18:39:22.623725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f20d8 00:18:09.318 [2024-05-16 18:39:22.625531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.318 [2024-05-16 18:39:22.625563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:09.318 [2024-05-16 18:39:22.640224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f2948 00:18:09.319 [2024-05-16 18:39:22.641979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.642012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.656651] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f31b8 00:18:09.319 [2024-05-16 18:39:22.658466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.658515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.673139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f3a28 00:18:09.319 [2024-05-16 18:39:22.674888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.674928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.689819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f4298 00:18:09.319 [2024-05-16 18:39:22.691565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.691598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.706222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f4b08 00:18:09.319 [2024-05-16 18:39:22.708047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.708093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.722927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f5378 00:18:09.319 [2024-05-16 18:39:22.724667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.724699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.739262] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f5be8 00:18:09.319 [2024-05-16 18:39:22.740886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.740946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.755754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f6458 00:18:09.319 [2024-05-16 18:39:22.757419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.757451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.772305] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f6cc8 00:18:09.319 [2024-05-16 18:39:22.773994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.774042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.788958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f7538 00:18:09.319 [2024-05-16 18:39:22.790612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.790644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:09.319 [2024-05-16 18:39:22.805566] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f7da8 00:18:09.319 [2024-05-16 18:39:22.807270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.319 [2024-05-16 18:39:22.807300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:09.578 [2024-05-16 18:39:22.822348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f8618 00:18:09.578 [2024-05-16 18:39:22.823988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.578 [2024-05-16 18:39:22.824036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:09.578 [2024-05-16 18:39:22.838659] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f8e88 00:18:09.578 [2024-05-16 18:39:22.840329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.578 [2024-05-16 18:39:22.840373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:09.578 [2024-05-16 18:39:22.855406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f96f8 00:18:09.578 [2024-05-16 18:39:22.857042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.578 [2024-05-16 18:39:22.857076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:09.578 [2024-05-16 18:39:22.871341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190f9f68 00:18:09.578 [2024-05-16 18:39:22.872981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:09.578 [2024-05-16 18:39:22.873013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:09.578 
[2024-05-16 18:39:22.887567] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fa7d8
00:18:09.578 [2024-05-16 18:39:22.889296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:09.578 [2024-05-16 18:39:22.889344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:18:09.578 [2024-05-16 18:39:22.904415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1faf930) with pdu=0x2000190fb048
00:18:09.578 [2024-05-16 18:39:22.905944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:09.578 [2024-05-16 18:39:22.905997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:18:09.578
00:18:09.578 Latency(us)
00:18:09.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:09.578 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:18:09.578 nvme0n1 : 2.01 15237.71 59.52 0.00 0.00 8391.47 2949.12 32172.22
00:18:09.578 ===================================================================================================================
00:18:09.578 Total : 15237.71 59.52 0.00 0.00 8391.47 2949.12 32172.22
00:18:09.578 0
00:18:09.578 18:39:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:18:09.578 18:39:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:18:09.578 18:39:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:18:09.578 18:39:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:18:09.578 | .driver_specific
00:18:09.578 | .nvme_error
00:18:09.578 | .status_code
00:18:09.578 | .command_transient_transport_error'
00:18:09.837 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 120 > 0 ))
00:18:09.837 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80653
00:18:09.837 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 80653 ']'
00:18:09.837 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 80653
00:18:09.837 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:18:09.837 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:09.837 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80653
00:18:09.838 killing process with pid 80653
Received shutdown signal, test time was about 2.000000 seconds
00:18:09.838
00:18:09.838 Latency(us)
00:18:09.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:09.838 ===================================================================================================================
00:18:09.838 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:09.838 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:18:09.838 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:18:09.838 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80653'
00:18:09.838 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 80653
00:18:09.838 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 80653
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80708
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80708 /var/tmp/bperf.sock
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 80708 ']'
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:18:10.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:18:10.096 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:18:10.097 18:39:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:10.355 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:10.355 Zero copy mechanism will not be used.
00:18:10.355 [2024-05-16 18:39:23.634348] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
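The pass/fail decision for the run that just ended is driven entirely over the bperf RPC socket: host/digest.sh asks the old bdevperf process (pid 80653) for per-bdev I/O statistics and pulls the command_transient_transport_error counter out with jq before killing it and launching the next bdevperf instance. A minimal standalone sketch of that same query, assuming the SPDK checkout at /home/vagrant/spdk_repo/spdk and a bdevperf instance already listening on /var/tmp/bperf.sock (both paths taken from the trace above, nothing else added):

  # Ask the running bdevperf for nvme0n1 statistics and extract the transient
  # transport error count - the value the harness compares against 0 (here 120).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

A non-zero result is what the (( 120 > 0 )) check above is testing for; only then does the harness tear the process down and move on to the 131072-byte, queue-depth-16 variant.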
00:18:10.355 [2024-05-16 18:39:23.634454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80708 ]
00:18:10.355 [2024-05-16 18:39:23.774309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:10.613 [2024-05-16 18:39:23.927225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:18:10.614 [2024-05-16 18:39:24.005686] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:18:11.180 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:18:11.180 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:18:11.180 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:11.180 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:18:11.439 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:18:11.439 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:11.440 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:11.440 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:11.440 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:11.440 18:39:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:18:11.698 nvme0n1
00:18:11.959 18:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:18:11.959 18:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:11.959 18:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:18:11.959 18:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:11.959 18:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:18:11.959 18:39:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:18:11.959 I/O size of 131072 is greater than zero copy threshold (65536).
00:18:11.959 Zero copy mechanism will not be used.
00:18:11.959 Running I/O for 2 seconds...
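The trace above is what arms the second error pass: the new bdevperf is configured to keep per-command NVMe error statistics and given a bdev retry count of -1, the controller is attached over TCP with data digest enabled (--ddgst), crc32c error injection in the accel layer is first cleared and then re-armed in corrupt mode, and perform_tests kicks off the 2-second random-write run. A condensed sketch of that same RPC sequence, assuming the socket, scripts, and target address exactly as captured in the log (the RPC shell variable is just a hypothetical shorthand, not part of the harness):

  # Shorthand for the JSON-RPC client pointed at the bdevperf instance.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1      # record NVMe error stats (flags as in the trace)
  $RPC accel_error_inject_error -o crc32c -t disable                      # clear any previous crc32c injection
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                     # data digest enabled on the TCP qpair
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32                # re-arm crc32c corruption (flags as in the trace)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then surfaces in the output that follows as a data_crc32_calc_done error on the new qpair (0x1fafad0) paired with a COMMAND TRANSIENT TRANSPORT ERROR completion.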
00:18:11.959 [2024-05-16 18:39:25.351098] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.351470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.351518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.357092] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.357408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.357438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.362720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.363088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.363117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.368446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.368788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.368830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.374222] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.374529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.374558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.379879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.380211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.380257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.385334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.385416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.385440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.391023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.391106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.391130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.396847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.396965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.397003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.402395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.402476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.402499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.408116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.408195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.408218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.413376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.413453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.413476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.418845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.418980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.419036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.424412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.424478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.424499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.429848] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.429934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.429973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.435577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.435674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.435697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.441212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.441293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.441317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.446721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.446793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.446818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.452266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.452346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.452370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:11.959 [2024-05-16 18:39:25.458140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:11.959 [2024-05-16 18:39:25.458207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:11.959 [2024-05-16 18:39:25.458244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.220 [2024-05-16 18:39:25.463810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.220 [2024-05-16 18:39:25.463909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-05-16 18:39:25.463962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.220 [2024-05-16 18:39:25.469003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.220 [2024-05-16 18:39:25.469087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-05-16 18:39:25.469110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.220 [2024-05-16 18:39:25.474247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.220 [2024-05-16 18:39:25.474330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-05-16 18:39:25.474368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.220 [2024-05-16 18:39:25.480001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.220 [2024-05-16 18:39:25.480083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.220 [2024-05-16 18:39:25.480105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.485270] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.485353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.485375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.490863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.491024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.491079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.496766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.496880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.496907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.502380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.502475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 
18:39:25.502499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.507990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.508091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.508144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.513399] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.513477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.513499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.518797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.518947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.518991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.524205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.524301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.524325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.529471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.529549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.529594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.534857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.535001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.535024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.540316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.540378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:12.221 [2024-05-16 18:39:25.540399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.545699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.545769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.545792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.551338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.551413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.551446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.556938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.557064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.557106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.562625] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.562698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.562723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.567995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.568074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.568096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.573458] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.573539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.573564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.578882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.579025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.579048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.584350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.584429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.584453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.589901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.590033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.590056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.595337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.595408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.595433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.600726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.600794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.600817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.606321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.606406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.606430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.611611] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.611706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.611728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.616762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.616844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.616895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.621829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.621947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.622024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.627113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.627224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.627251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.632460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.221 [2024-05-16 18:39:25.632541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.221 [2024-05-16 18:39:25.632582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.221 [2024-05-16 18:39:25.638057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.638145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.638172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.643416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.643488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.643542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.649001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.649084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.649106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.654424] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.654503] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.654525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.660183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.660283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.660305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.665768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.665858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.665883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.671172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.671298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.671322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.676656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.676765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.676799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.681881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.682036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.682061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.687353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.687425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.687449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.692859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.692986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.693025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.698259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.698339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.698361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.703719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.703791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.703816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.709445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.709528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.709550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.222 [2024-05-16 18:39:25.715217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.222 [2024-05-16 18:39:25.715453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.222 [2024-05-16 18:39:25.715612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.721182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.721494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.721698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.727240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.727463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.727788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.732661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 
18:39:25.732905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.733063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.738353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.738623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.738793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.743956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.744287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.744519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.749736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.750037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.750243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.755243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.755471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.755676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.760809] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.761094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.761298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.766497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.766738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.767001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.772185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with 
pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.772474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.772669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.777821] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.778149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.778401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.783276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.783485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.783509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.789248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.789325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.789351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.794886] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.795216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.795255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.800378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.800457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.800480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.805869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.805972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.806012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.811537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.811639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.811665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.817347] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.817435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.817460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.822854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.822950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.822976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.828312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.828397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.828419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.833778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.833884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.833909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.839308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.839383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.839407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.844658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.844728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.844751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.850126] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.850208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.850228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.855692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.855767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.855790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.861214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.861406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.861430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.866439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.866530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.866553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.872026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.872120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.872143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.877509] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.482 [2024-05-16 18:39:25.877635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.482 [2024-05-16 18:39:25.877661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.482 [2024-05-16 18:39:25.883044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.883139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.883162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.888452] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.888560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.888616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.894053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.894136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.894158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.899303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.899373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.899396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.904808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.904945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.904997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.910194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.910320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.910341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.915545] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.915625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.915648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.920914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.921012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.921035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.483 
[2024-05-16 18:39:25.926314] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.926401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.926423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.932146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.932268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.932291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.937703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.937880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.937919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.943209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.943297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.943324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.949007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.949109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.949134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.954607] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.954817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.954866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.960214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.960321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.960346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.965878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.966007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.966045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.971464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.971578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.971617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.483 [2024-05-16 18:39:25.977256] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.483 [2024-05-16 18:39:25.977345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.483 [2024-05-16 18:39:25.977368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:25.982999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:25.983136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:25.983160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:25.988530] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:25.988680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:25.988711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:25.994286] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:25.994400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:25.994423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:25.999759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:25.999844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:25.999876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.004849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.005188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.005233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.010505] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.010849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.010891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.016067] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.016398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.016429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.021799] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.022155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.022189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.027380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.027685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.027715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.033032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.033336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.033365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.038427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.038732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.038760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.044003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.044329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.044357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.049608] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.049920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.049954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.055154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.055483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.055512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.060684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.060997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.061030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.066002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.066297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.066325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.071548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.071860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.071889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.077185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.077529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.077572] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.083052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.083390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.083418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.088758] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.089158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.089210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.094368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.094688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.094722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.099713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.099789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.099820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.105331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.105400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.105423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.111103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.111214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.743 [2024-05-16 18:39:26.111240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.743 [2024-05-16 18:39:26.116663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.743 [2024-05-16 18:39:26.116738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.116764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.122232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.122314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.122338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.127770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.127900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.127983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.133162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.133263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.133289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.138564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.138642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.138668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.144041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.144120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.144144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.149431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.149527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.149550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.154932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.155000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 
18:39:26.155025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.160398] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.160471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.160495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.165671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.165748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.165771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.170909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.170990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.171013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.176271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.176342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.176365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.181767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.181876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.181900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.187338] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.187406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.187430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.192790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.192889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:12.744 [2024-05-16 18:39:26.192926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.198186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.198280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.198302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.203760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.203876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.203899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.209357] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.209445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.209468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.214904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.214998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.215021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.220443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.220527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.220548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.225797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.225910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.225932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.231079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.231165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.231213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.236529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.236640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.236674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.744 [2024-05-16 18:39:26.242334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:12.744 [2024-05-16 18:39:26.242449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.744 [2024-05-16 18:39:26.242480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.003 [2024-05-16 18:39:26.247841] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.003 [2024-05-16 18:39:26.248020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.003 [2024-05-16 18:39:26.248047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.003 [2024-05-16 18:39:26.253225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.003 [2024-05-16 18:39:26.253317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.003 [2024-05-16 18:39:26.253341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.258720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.258797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.258822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.264218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.264299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.264323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.269639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.269725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.269750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.275220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.275324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.275355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.280590] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.280677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.280702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.285982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.286068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.286093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.291366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.291460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.291485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.296715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.296788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.296813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.302075] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.302188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.302228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.307499] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.307630] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.307654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.312978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.313076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.313100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.318315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.318426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.318450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.323766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.323876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.323901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.329218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.329340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.329364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.334688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.334780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.334809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.340308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.340428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.340451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.346053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.346156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.346179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.351609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.351735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.351759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.357117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.357200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.357237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.362669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.362781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.362803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.368123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.368222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.368244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.373426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.373590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.373623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.379030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.379118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.379157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.384596] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 
18:39:26.384673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.384697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.390223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.390340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.390363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.395729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.395829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.395853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.401284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.401413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.401437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.004 [2024-05-16 18:39:26.406725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.004 [2024-05-16 18:39:26.406820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.004 [2024-05-16 18:39:26.406847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.412275] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.412388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.412412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.417728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.417830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.417857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.423233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 
00:18:13.005 [2024-05-16 18:39:26.423309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.423335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.428657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.428737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.428762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.434406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.434522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.434546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.440001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.440087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.440111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.445461] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.445558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.445606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.450860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.450963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.450986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.456253] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.456387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.456409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.461727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) 
with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.461797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.461823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.467054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.467172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.467225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.472588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.472672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.472698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.478056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.478155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.478177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.483406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.483478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.483503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.488792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.488932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.488985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.494208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.494308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.494330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.005 [2024-05-16 18:39:26.499582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.005 [2024-05-16 18:39:26.499696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.005 [2024-05-16 18:39:26.499719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.505454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.505589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.505621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.511258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.511344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.511370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.516645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.516829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.516854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.521909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.522068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.522091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.527425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.527510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.527535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.532917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.533035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.533059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.538293] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.538438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.538460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.543852] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.543979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.544002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.549330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.549435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.549458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.554816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.554926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.554950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.560274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.560398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.560419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.566079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.566255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.264 [2024-05-16 18:39:26.566278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.264 [2024-05-16 18:39:26.571531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.264 [2024-05-16 18:39:26.571627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.571661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.265 
[2024-05-16 18:39:26.576072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.576244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.576277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.581480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.581632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.581666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.586884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.587008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.587040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.592331] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.592429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.592454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.597623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.597696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.597721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.602967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.603093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.603117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.608544] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.608651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.608687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.614053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.614170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.614194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.619361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.619456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.619489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.624867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.624982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.625020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.630085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.630196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.630219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.635589] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.635714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.635737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.640860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.641001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.641024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.646219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.646315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.646339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.651393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.651463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.651486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.656715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.656816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.656840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.662240] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.662332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.662354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.667715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.667788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.667811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.673167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.673294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.673316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.678717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.678788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.678815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.684288] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.684370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.684392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.689640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.689711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.689734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.695081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.695256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.695278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.700657] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.700730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.700760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.706153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.706249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.706270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.711456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.711524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.711559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.717220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.717320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.265 [2024-05-16 18:39:26.717341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.265 [2024-05-16 18:39:26.722827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.265 [2024-05-16 18:39:26.722963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.266 [2024-05-16 18:39:26.723015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.266 [2024-05-16 18:39:26.728452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.266 [2024-05-16 18:39:26.728537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.266 [2024-05-16 18:39:26.728559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.266 [2024-05-16 18:39:26.733850] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.266 [2024-05-16 18:39:26.733954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.266 [2024-05-16 18:39:26.733991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.266 [2024-05-16 18:39:26.739468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.266 [2024-05-16 18:39:26.739539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.266 [2024-05-16 18:39:26.739561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.266 [2024-05-16 18:39:26.745248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.266 [2024-05-16 18:39:26.745383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.266 [2024-05-16 18:39:26.745406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.266 [2024-05-16 18:39:26.751173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.266 [2024-05-16 18:39:26.751271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.266 [2024-05-16 18:39:26.751294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.266 [2024-05-16 18:39:26.756739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.266 [2024-05-16 18:39:26.756810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.266 [2024-05-16 18:39:26.756834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.266 [2024-05-16 18:39:26.762259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.266 [2024-05-16 18:39:26.762389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.266 [2024-05-16 
18:39:26.762421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.525 [2024-05-16 18:39:26.768358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.525 [2024-05-16 18:39:26.768445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.525 [2024-05-16 18:39:26.768468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.525 [2024-05-16 18:39:26.774180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.525 [2024-05-16 18:39:26.774292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.525 [2024-05-16 18:39:26.774325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.525 [2024-05-16 18:39:26.779726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.525 [2024-05-16 18:39:26.779824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.525 [2024-05-16 18:39:26.779864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.525 [2024-05-16 18:39:26.785119] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.785238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.785259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.790947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.791025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.791045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.796359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.796463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.796484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.801866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.802161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:13.526 [2024-05-16 18:39:26.802202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.807616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.807690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.807717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.813124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.813237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.813263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.818801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.818889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.818929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.824312] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.824406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.824431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.829997] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.830119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.830143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.835707] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.835782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.835814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.841103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.841201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.841226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.846698] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.846771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.846796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.852299] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.852385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.852410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.858061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.858159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.858183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.863776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.863876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.863900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.869454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.869593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.869623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.875343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.875439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.875464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.881218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.881353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 
nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.881377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.887048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.887144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.887166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.892778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.892863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.892888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.898815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.898929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.898954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.904637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.904728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.904753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.910228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.910372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.910396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.915774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.915886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.915924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.921305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.921406] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.921428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.926677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.926775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.926797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.932074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.932178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.932201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.937242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.526 [2024-05-16 18:39:26.937337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.526 [2024-05-16 18:39:26.937358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.526 [2024-05-16 18:39:26.942388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.942508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.942544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.947619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.947767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.947792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.953210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.953315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.953338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.958697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.958782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.958805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.964218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.964371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.964410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.969816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.969926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.969950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.975234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.975330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.975356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.980741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.980810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.980836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.986292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.986380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.986406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.991869] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:26.992026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.992049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:26.997384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 
18:39:26.997507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:26.997530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:27.002753] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:27.002827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:27.002851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:27.008339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:27.008441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:27.008466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:27.013854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:27.013973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:27.014011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:27.019348] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:27.019440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:27.019463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.527 [2024-05-16 18:39:27.025125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.527 [2024-05-16 18:39:27.025235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.527 [2024-05-16 18:39:27.025257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.031020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.031134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.031159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.036674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with 
pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.036746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.036771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.042110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.042264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.042320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.047754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.047839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.047864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.053369] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.053484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.053508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.058960] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.059079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.059101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.064350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.064449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.064472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.069602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.069725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.069748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.075233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.075330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.075355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.080630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.080724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.080748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.086118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.086222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.086246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.091656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.787 [2024-05-16 18:39:27.091752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.787 [2024-05-16 18:39:27.091777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.787 [2024-05-16 18:39:27.097282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.097390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.097414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.102772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.102903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.102943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.108464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.108719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.108758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.114068] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.114210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.114234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.119422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.119534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.119591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.124970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.125080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.125102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.130418] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.130543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.130584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.136131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.136268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.136291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.141655] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.141826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.141864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.147278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.147370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.147394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.152953] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.153062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.153085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.158391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.158499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.158521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.163896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.164028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.164050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.169402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.169478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.169501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.174863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.174987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.175009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.180453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.180530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.180552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.186020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.186128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.186151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.788 
[2024-05-16 18:39:27.191446] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.191551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.191584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.196867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.197072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.197095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.202403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.202716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.202750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.208068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.208405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.208438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.214082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.214478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.214512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.220049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.220460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.220507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.225656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.225737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.225773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.231205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.231287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.788 [2024-05-16 18:39:27.231313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.788 [2024-05-16 18:39:27.236543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.788 [2024-05-16 18:39:27.236642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.789 [2024-05-16 18:39:27.236668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.789 [2024-05-16 18:39:27.242074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.789 [2024-05-16 18:39:27.242172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.789 [2024-05-16 18:39:27.242196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.789 [2024-05-16 18:39:27.247695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.789 [2024-05-16 18:39:27.247780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.789 [2024-05-16 18:39:27.247804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.789 [2024-05-16 18:39:27.253131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.789 [2024-05-16 18:39:27.253256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.789 [2024-05-16 18:39:27.253279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.789 [2024-05-16 18:39:27.258584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.789 [2024-05-16 18:39:27.258699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.789 [2024-05-16 18:39:27.258726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.789 [2024-05-16 18:39:27.264389] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.789 [2024-05-16 18:39:27.264486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.789 [2024-05-16 18:39:27.264508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.789 [2024-05-16 18:39:27.270133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.789 [2024-05-16 18:39:27.270264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.789 [2024-05-16 18:39:27.270294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.789 [2024-05-16 18:39:27.275714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.789 [2024-05-16 18:39:27.275789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.789 [2024-05-16 18:39:27.275814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.789 [2024-05-16 18:39:27.281310] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:13.789 [2024-05-16 18:39:27.281404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.789 [2024-05-16 18:39:27.281429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.789 [2024-05-16 18:39:27.287343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.047 [2024-05-16 18:39:27.287427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.047 [2024-05-16 18:39:27.287460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.047 [2024-05-16 18:39:27.292936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.047 [2024-05-16 18:39:27.293077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.047 [2024-05-16 18:39:27.293101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.047 [2024-05-16 18:39:27.298606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.047 [2024-05-16 18:39:27.298682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.047 [2024-05-16 18:39:27.298707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.047 [2024-05-16 18:39:27.304165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.047 [2024-05-16 18:39:27.304275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.047 [2024-05-16 18:39:27.304297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.047 [2024-05-16 18:39:27.309715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.047 [2024-05-16 18:39:27.309789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.047 [2024-05-16 18:39:27.309814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.047 [2024-05-16 18:39:27.315079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.047 [2024-05-16 18:39:27.315185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.047 [2024-05-16 18:39:27.315226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.047 [2024-05-16 18:39:27.320467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.047 [2024-05-16 18:39:27.320551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.047 [2024-05-16 18:39:27.320597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.048 [2024-05-16 18:39:27.325988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.048 [2024-05-16 18:39:27.326107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.048 [2024-05-16 18:39:27.326141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.048 [2024-05-16 18:39:27.331574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.048 [2024-05-16 18:39:27.331661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.048 [2024-05-16 18:39:27.331686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.048 [2024-05-16 18:39:27.337102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.048 [2024-05-16 18:39:27.337180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.048 [2024-05-16 18:39:27.337219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.048 [2024-05-16 18:39:27.342452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1fafad0) with pdu=0x2000190fef90 00:18:14.048 [2024-05-16 18:39:27.342531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.048 [2024-05-16 18:39:27.342554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.048 00:18:14.048 Latency(us) 00:18:14.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.048 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:14.048 nvme0n1 : 2.00 5592.11 699.01 0.00 0.00 2853.61 2025.66 7477.06 00:18:14.048 =================================================================================================================== 00:18:14.048 Total : 5592.11 699.01 0.00 0.00 2853.61 2025.66 7477.06 00:18:14.048 0 00:18:14.048 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:14.048 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:14.048 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:14.048 | .driver_specific 00:18:14.048 | .nvme_error 00:18:14.048 | .status_code 00:18:14.048 | .command_transient_transport_error' 00:18:14.048 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 361 > 0 )) 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80708 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 80708 ']' 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 80708 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80708 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:14.307 killing process with pid 80708 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80708' 00:18:14.307 Received shutdown signal, test time was about 2.000000 seconds 00:18:14.307 00:18:14.307 Latency(us) 00:18:14.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.307 =================================================================================================================== 00:18:14.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 80708 00:18:14.307 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 80708 00:18:14.566 18:39:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80499 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 80499 ']' 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 80499 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # 
uname 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80499 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:14.566 killing process with pid 80499 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80499' 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 80499 00:18:14.566 [2024-05-16 18:39:28.026229] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:14.566 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 80499 00:18:15.133 00:18:15.133 real 0m19.345s 00:18:15.133 user 0m37.283s 00:18:15.133 sys 0m5.060s 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:15.133 ************************************ 00:18:15.133 END TEST nvmf_digest_error 00:18:15.133 ************************************ 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:15.133 rmmod nvme_tcp 00:18:15.133 rmmod nvme_fabrics 00:18:15.133 rmmod nvme_keyring 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80499 ']' 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80499 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 80499 ']' 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 80499 00:18:15.133 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (80499) - No such process 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 80499 is not found' 00:18:15.133 Process with pid 80499 is not found 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:15.133 
18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:15.133 00:18:15.133 real 0m38.567s 00:18:15.133 user 1m12.515s 00:18:15.133 sys 0m10.638s 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:15.133 ************************************ 00:18:15.133 END TEST nvmf_digest 00:18:15.133 ************************************ 00:18:15.133 18:39:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:15.133 18:39:28 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:18:15.133 18:39:28 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:18:15.133 18:39:28 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:15.133 18:39:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:15.133 18:39:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:15.133 18:39:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:15.133 ************************************ 00:18:15.133 START TEST nvmf_host_multipath 00:18:15.133 ************************************ 00:18:15.133 18:39:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:15.392 * Looking for test storage... 
00:18:15.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:15.392 18:39:28 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:15.392 Cannot find device "nvmf_tgt_br" 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.392 Cannot find device "nvmf_tgt_br2" 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:18:15.392 Cannot find device "nvmf_tgt_br" 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:15.392 Cannot find device "nvmf_tgt_br2" 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.392 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.392 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:15.392 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:15.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:15.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:18:15.658 00:18:15.658 --- 10.0.0.2 ping statistics --- 00:18:15.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.658 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:15.658 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:15.658 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:18:15.658 00:18:15.658 --- 10.0.0.3 ping statistics --- 00:18:15.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.658 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:15.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:15.658 00:18:15.658 --- 10.0.0.1 ping statistics --- 00:18:15.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.658 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.658 18:39:28 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:15.658 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:15.658 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.658 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:15.658 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:15.658 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=80975 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 80975 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- 
common/autotest_common.sh@827 -- # '[' -z 80975 ']' 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:15.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:15.659 18:39:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:15.659 [2024-05-16 18:39:29.091107] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:18:15.659 [2024-05-16 18:39:29.091235] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.934 [2024-05-16 18:39:29.234494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:15.934 [2024-05-16 18:39:29.376795] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.934 [2024-05-16 18:39:29.376894] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.934 [2024-05-16 18:39:29.376907] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.934 [2024-05-16 18:39:29.376930] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.934 [2024-05-16 18:39:29.376937] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.934 [2024-05-16 18:39:29.377134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.934 [2024-05-16 18:39:29.377560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.191 [2024-05-16 18:39:29.458203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:16.755 18:39:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:16.755 18:39:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:18:16.755 18:39:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.755 18:39:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.755 18:39:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:16.755 18:39:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.755 18:39:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80975 00:18:16.755 18:39:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:17.013 [2024-05-16 18:39:30.348993] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.013 18:39:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:17.270 Malloc0 00:18:17.270 18:39:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:17.526 18:39:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.784 18:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:18.042 [2024-05-16 18:39:31.362630] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:18.042 [2024-05-16 18:39:31.362978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:18.042 18:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:18.300 [2024-05-16 18:39:31.586905] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81031 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81031 /var/tmp/bdevperf.sock 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 81031 ']' 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath 
-- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:18.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:18.300 18:39:31 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:19.234 18:39:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:19.234 18:39:32 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:18:19.234 18:39:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:19.492 18:39:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:20.057 Nvme0n1 00:18:20.057 18:39:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:20.316 Nvme0n1 00:18:20.316 18:39:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:20.316 18:39:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:21.251 18:39:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:21.251 18:39:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:21.509 18:39:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:21.767 18:39:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:21.767 18:39:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81076 00:18:21.767 18:39:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:21.767 18:39:35 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:28.327 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:28.327 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:28.327 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:28.327 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.327 Attaching 4 probes... 00:18:28.327 @path[10.0.0.2, 4421]: 15838 00:18:28.327 @path[10.0.0.2, 4421]: 16008 00:18:28.327 @path[10.0.0.2, 4421]: 16595 00:18:28.327 @path[10.0.0.2, 4421]: 16573 00:18:28.327 @path[10.0.0.2, 4421]: 16107 00:18:28.327 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:28.327 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:28.328 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:28.328 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:28.328 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:28.328 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:28.328 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81076 00:18:28.328 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:28.328 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:28.328 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:28.328 18:39:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:28.586 18:39:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:28.586 18:39:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81189 00:18:28.586 18:39:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:28.586 18:39:42 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.149 Attaching 4 probes... 
00:18:35.149 @path[10.0.0.2, 4420]: 16857 00:18:35.149 @path[10.0.0.2, 4420]: 17240 00:18:35.149 @path[10.0.0.2, 4420]: 17259 00:18:35.149 @path[10.0.0.2, 4420]: 17279 00:18:35.149 @path[10.0.0.2, 4420]: 15402 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81189 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:35.149 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:35.408 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:35.408 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.408 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81307 00:18:35.408 18:39:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:41.969 18:39:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:41.969 18:39:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:41.969 Attaching 4 probes... 
00:18:41.969 @path[10.0.0.2, 4421]: 13070 00:18:41.969 @path[10.0.0.2, 4421]: 16080 00:18:41.969 @path[10.0.0.2, 4421]: 14564 00:18:41.969 @path[10.0.0.2, 4421]: 14609 00:18:41.969 @path[10.0.0.2, 4421]: 17392 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81307 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:41.969 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:42.228 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:42.486 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:42.486 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:42.486 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81423 00:18:42.486 18:39:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:49.052 18:40:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:49.052 18:40:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.052 Attaching 4 probes... 
00:18:49.052 00:18:49.052 00:18:49.052 00:18:49.052 00:18:49.052 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81423 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:49.052 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:49.311 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:49.311 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81532 00:18:49.311 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:49.311 18:40:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.906 Attaching 4 probes... 
00:18:55.906 @path[10.0.0.2, 4421]: 15511 00:18:55.906 @path[10.0.0.2, 4421]: 15476 00:18:55.906 @path[10.0.0.2, 4421]: 15222 00:18:55.906 @path[10.0.0.2, 4421]: 16164 00:18:55.906 @path[10.0.0.2, 4421]: 17000 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81532 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.906 18:40:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:55.906 [2024-05-16 18:40:09.178945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294120 is same with the state(5) to be set 00:18:55.906 [2024-05-16 18:40:09.179020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294120 is same with the state(5) to be set 00:18:55.906 [2024-05-16 18:40:09.179030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294120 is same with the state(5) to be set 00:18:55.906 [2024-05-16 18:40:09.179039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294120 is same with the state(5) to be set 00:18:55.906 18:40:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:56.843 18:40:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:56.843 18:40:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81656 00:18:56.843 18:40:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:56.843 18:40:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:03.436 Attaching 4 probes... 
00:19:03.436 @path[10.0.0.2, 4420]: 14741 00:19:03.436 @path[10.0.0.2, 4420]: 14831 00:19:03.436 @path[10.0.0.2, 4420]: 15019 00:19:03.436 @path[10.0.0.2, 4420]: 15135 00:19:03.436 @path[10.0.0.2, 4420]: 15280 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81656 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:03.436 [2024-05-16 18:40:16.756802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:03.436 18:40:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:03.695 18:40:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:10.261 18:40:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:10.261 18:40:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81829 00:19:10.261 18:40:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:10.261 18:40:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80975 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:16.833 Attaching 4 probes... 
00:19:16.833 @path[10.0.0.2, 4421]: 17095 00:19:16.833 @path[10.0.0.2, 4421]: 17597 00:19:16.833 @path[10.0.0.2, 4421]: 19123 00:19:16.833 @path[10.0.0.2, 4421]: 17618 00:19:16.833 @path[10.0.0.2, 4421]: 17528 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81829 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81031 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 81031 ']' 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 81031 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81031 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:16.833 killing process with pid 81031 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81031' 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 81031 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 81031 00:19:16.833 Connection closed with partial response: 00:19:16.833 00:19:16.833 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81031 00:19:16.833 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:16.833 [2024-05-16 18:39:31.653011] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:16.833 [2024-05-16 18:39:31.653212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81031 ] 00:19:16.833 [2024-05-16 18:39:31.792364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.833 [2024-05-16 18:39:31.947452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.833 [2024-05-16 18:39:32.023769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:16.833 Running I/O for 90 seconds... 
00:19:16.833 [2024-05-16 18:39:41.992485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.992606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.992672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.992694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.992717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.992733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.992755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.992771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.992793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.992808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.992845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.992863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.992885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.992904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.992927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.992972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.992993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:129608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.833 [2024-05-16 18:39:41.993009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:129616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.833 [2024-05-16 18:39:41.993044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.833 [2024-05-16 18:39:41.993102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.833 [2024-05-16 18:39:41.993158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:129640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.833 [2024-05-16 18:39:41.993210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:129648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.833 [2024-05-16 18:39:41.993260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.833 [2024-05-16 18:39:41.993297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.833 [2024-05-16 18:39:41.993332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.993645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.993684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.993725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.993761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.993798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.993835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.993882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.993938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.993967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.993984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.994006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.833 [2024-05-16 18:39:41.994022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:16.833 [2024-05-16 18:39:41.994043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130096 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:81 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.994618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.994656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:129680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.994693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.994732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.994771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.994809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.994846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.994905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.994979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.994995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 
18:39:41.995050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 
cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:130288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:130296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.834 [2024-05-16 18:39:41.995630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:129736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.995667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.834 [2024-05-16 18:39:41.995705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:16.834 [2024-05-16 18:39:41.995727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:129752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.995743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.995764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:129760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.995780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.995801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.995818] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.995851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.995870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.995908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.995938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.995959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.995983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:130312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:130320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:130344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:130360 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:130368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:130384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:130392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:130400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:130408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:130432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:130440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:130448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.835 [2024-05-16 18:39:41.996842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:129800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.996880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:129808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.996945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.996968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.996984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:129824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:129840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997141] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:129872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:129880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.835 [2024-05-16 18:39:41.997423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:16.835 [2024-05-16 18:39:41.997445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:41.997467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:41.999059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:130472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:130488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:130536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:130560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:130568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:41.999687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:41.999703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.601634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.601702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.601771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.601792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.601816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.601847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.601871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.601888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.601910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.601954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:16.836 [2024-05-16 18:39:48.602017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.836 [2024-05-16 18:39:48.602400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:48.602435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:48.602468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:48.602500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:48.602533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:48.602583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:48.602630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:48.602669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:16.836 [2024-05-16 18:39:48.602691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.836 [2024-05-16 18:39:48.602707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.602736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.602751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.602773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.602790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.602812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.602828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.602850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.602865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.602887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.602944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.602980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.602995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.603028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.603061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 
dnr:0 00:19:16.837 [2024-05-16 18:39:48.603156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:44528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.837 [2024-05-16 18:39:48.603657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.603705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.603742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.603779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.603817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.603854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.603924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.603975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:16.837 [2024-05-16 18:39:48.603995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.837 [2024-05-16 18:39:48.604009] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.838 [2024-05-16 18:39:48.604043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:44624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:16.838 [2024-05-16 18:39:48.604416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.838 [2024-05-16 18:39:48.604466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.838 [2024-05-16 18:39:48.604501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.838 [2024-05-16 18:39:48.604535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.838 [2024-05-16 18:39:48.604587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.838 [2024-05-16 18:39:48.604624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.838 [2024-05-16 18:39:48.604661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.838 [2024-05-16 18:39:48.604714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44064 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.604984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.604999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:16.838 [2024-05-16 18:39:48.605619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:19:16.838 [2024-05-16 18:39:48.605657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.838 [2024-05-16 18:39:48.605673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.605694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.605710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.605759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.605779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.605802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.605818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.605850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.605865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.605901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.605949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.605970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.605985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.606020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.606056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.606090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.606687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.606702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.607521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.839 [2024-05-16 18:39:48.607549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.607585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.607614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.607648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.607664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.607695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.607711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.607742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:16.839 [2024-05-16 18:39:48.607758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.607789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.607805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.607835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.607866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.607901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.607923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:48.607970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:44808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:48.607990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:55.758625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:55.758707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:55.758776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:55.758797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:55.758833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:55.758852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:55.758873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:29112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:55.758888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:55.758908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:55.758949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:55.758972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:29128 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:55.758987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:55.759007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:29136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.839 [2024-05-16 18:39:55.759022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:16.839 [2024-05-16 18:39:55.759043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.759058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.759103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.759138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:29168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.759173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:29176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.759232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:29184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.759267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.759300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:29200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.759333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759353] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:29208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.759367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:28512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 
18:39:55.759726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.759959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.759974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.760028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.760066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.760100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 
cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.760134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.760169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.760214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.760249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:29272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.840 [2024-05-16 18:39:55.760283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.760318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.760360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.760397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.760431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.760466] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.760501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.760536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:16.840 [2024-05-16 18:39:55.760556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.840 [2024-05-16 18:39:55.760571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.760609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:29288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.760647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:29296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.760682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:29304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.760716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.760750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:29320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.760795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:29328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 
18:39:55.760846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.760883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.760918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.760956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.760975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.760990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29344 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.761436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.761482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.761546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.761585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.761624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:29384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.761663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:29392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.761703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:29400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.841 [2024-05-16 18:39:55.761742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:104 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.761965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.761990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.762006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.762038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.762054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.762078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.762093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:16.841 [2024-05-16 18:39:55.762117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.841 [2024-05-16 18:39:55.762132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762274] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.762733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.842 [2024-05-16 18:39:55.762790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:29416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.842 [2024-05-16 18:39:55.762847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:29424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.842 [2024-05-16 18:39:55.762889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.842 [2024-05-16 18:39:55.762928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:29440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.842 [2024-05-16 18:39:55.762967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.762991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.842 [2024-05-16 18:39:55.763006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:29456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.842 [2024-05-16 18:39:55.763069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.842 [2024-05-16 18:39:55.763120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 
18:39:55.763608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.842 [2024-05-16 18:39:55.763891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.842 [2024-05-16 18:39:55.763947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:16.842 [2024-05-16 18:39:55.763971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:39:55.763990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:39:55.764020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:39:55.764035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:39:55.764059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:29496 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:39:55.764074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:39:55.764098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:29504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:39:55.764113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:39:55.764138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:39:55.764153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:39:55.764181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:29520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:39:55.764196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:39:55.764244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:29528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:39:55.764261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:53800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.179244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:53808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.179311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:53816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.179343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:53824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.179374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:53832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.179404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:89 nsid:1 lba:53840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.179434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:53848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.179464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:53232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:53240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:53248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:53256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:53264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:53272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:53280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.179763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:53184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:53192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:53200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:53208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:53216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.179959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:53864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.179983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.179995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:53872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.180006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:53880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.180039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:53888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.180065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:53896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.180090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:53904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.180114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:53912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.180138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:53920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.843 [2024-05-16 18:40:09.180162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:53288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.180186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:53296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.180214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:53304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.180238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:53312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.180262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.843 [2024-05-16 18:40:09.180275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.843 [2024-05-16 18:40:09.180286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:53328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:122 nsid:1 lba:53336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:53360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:53368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:53376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:53384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:53392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:53400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:53408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:53416 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:53424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:53432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:53440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:53448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:53456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:53464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:53928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.844 [2024-05-16 18:40:09.180774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:53936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.844 [2024-05-16 18:40:09.180797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:53944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:16.844 [2024-05-16 18:40:09.180830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:53952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.844 [2024-05-16 18:40:09.180856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:53960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.844 [2024-05-16 18:40:09.180879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:53968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.844 [2024-05-16 18:40:09.180902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:53976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.844 [2024-05-16 18:40:09.180926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:53984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.844 [2024-05-16 18:40:09.180949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.180982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.180995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:53488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:53496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:53504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:53512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181081] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:53520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:53528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:53536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.844 [2024-05-16 18:40:09.181304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.844 [2024-05-16 18:40:09.181316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:53992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.845 [2024-05-16 18:40:09.181374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.845 [2024-05-16 18:40:09.181397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:54008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.845 [2024-05-16 18:40:09.181421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:54016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.845 [2024-05-16 18:40:09.181447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.845 [2024-05-16 18:40:09.181470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.845 [2024-05-16 18:40:09.181494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:54040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.845 [2024-05-16 18:40:09.181517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:54048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:16.845 [2024-05-16 18:40:09.181541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:53608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:53616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:53624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:53632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:53640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:53648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:53656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:53672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:53680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:53696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:53704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:53720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:53728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.845 [2024-05-16 18:40:09.181973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.181986] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fd2c0 is same with the state(5) to be set 00:19:16.845 [2024-05-16 18:40:09.181999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.845 [2024-05-16 18:40:09.182008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.845 [2024-05-16 18:40:09.182018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53744 len:8 PRP1 0x0 PRP2 0x0 00:19:16.845 [2024-05-16 18:40:09.182029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.182041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.845 [2024-05-16 18:40:09.182049] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.845 [2024-05-16 18:40:09.182058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53752 len:8 PRP1 0x0 PRP2 0x0 00:19:16.845 [2024-05-16 18:40:09.182069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.182080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:19:16.845 [2024-05-16 18:40:09.182088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.845 [2024-05-16 18:40:09.182097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53760 len:8 PRP1 0x0 PRP2 0x0 00:19:16.845 [2024-05-16 18:40:09.182108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.182119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.845 [2024-05-16 18:40:09.182128] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.845 [2024-05-16 18:40:09.182137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53768 len:8 PRP1 0x0 PRP2 0x0 00:19:16.845 [2024-05-16 18:40:09.182147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.182158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.845 [2024-05-16 18:40:09.182167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.845 [2024-05-16 18:40:09.182175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53776 len:8 PRP1 0x0 PRP2 0x0 00:19:16.845 [2024-05-16 18:40:09.182187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.182199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.845 [2024-05-16 18:40:09.182213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.845 [2024-05-16 18:40:09.182222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53784 len:8 PRP1 0x0 PRP2 0x0 00:19:16.845 [2024-05-16 18:40:09.182249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.182262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.845 [2024-05-16 18:40:09.182270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.845 [2024-05-16 18:40:09.182279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:53792 len:8 PRP1 0x0 PRP2 0x0 00:19:16.845 [2024-05-16 18:40:09.182289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.182300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.845 [2024-05-16 18:40:09.182308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.845 [2024-05-16 18:40:09.182317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54056 len:8 PRP1 0x0 PRP2 0x0 00:19:16.845 [2024-05-16 18:40:09.182327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.845 [2024-05-16 18:40:09.182339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.845 [2024-05-16 
18:40:09.182348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54064 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54072 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54080 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54088 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182504] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54096 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54104 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182597] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54112 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54120 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182680] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54128 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54136 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54144 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54152 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54160 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54168 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54176 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.182968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.182978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.182986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.182994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54184 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.183005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.183015] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.183028] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.195438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54192 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.195468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.195485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:16.846 [2024-05-16 18:40:09.195495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:16.846 [2024-05-16 18:40:09.195506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:54200 len:8 PRP1 0x0 PRP2 0x0 00:19:16.846 [2024-05-16 18:40:09.195518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.195572] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11fd2c0 was disconnected and freed. reset controller. 
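The wall of NOTICE entries above is the initiator's bdev_nvme layer draining a failed TCP path: every in-flight READ/WRITE on the I/O queue (sqid 1) completes with ABORTED - SQ DELETION, the requests still queued in software are completed manually, and the qpair at 0x11fd2c0 is freed before the controller reset starts. A failover of this shape can be provoked by hand against the same target by dropping and then restoring the listener the host is using (10.0.0.2:4421 here); the sketch below is only an illustration consistent with the log, not the exact logic of multipath.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# drop the path the host I/O is riding on; outstanding I/O gets aborted as above
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 10    # the host now sees 'connect() failed, errno = 111' on each reconnect attempt
# restore the listener so the next controller reset can succeed
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421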
00:19:16.846 [2024-05-16 18:40:09.195690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.846 [2024-05-16 18:40:09.195713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.195726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.846 [2024-05-16 18:40:09.195738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.195749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.846 [2024-05-16 18:40:09.195761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.195772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:16.846 [2024-05-16 18:40:09.195784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.846 [2024-05-16 18:40:09.195796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:16.846 [2024-05-16 18:40:09.195834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:16.847 [2024-05-16 18:40:09.195868] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12005c0 is same with the state(5) to be set 00:19:16.847 [2024-05-16 18:40:09.196776] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:16.847 [2024-05-16 18:40:09.196809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12005c0 (9): Bad file descriptor 00:19:16.847 [2024-05-16 18:40:09.197093] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:16.847 [2024-05-16 18:40:09.197119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12005c0 with addr=10.0.0.2, port=4421 00:19:16.847 [2024-05-16 18:40:09.197133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12005c0 is same with the state(5) to be set 00:19:16.847 [2024-05-16 18:40:09.197228] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12005c0 (9): Bad file descriptor 00:19:16.847 [2024-05-16 18:40:09.197291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:16.847 [2024-05-16 18:40:09.197309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:16.847 [2024-05-16 18:40:09.197322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:16.847 [2024-05-16 18:40:09.197349] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
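The failed attempt just above is what a reconnect looks like while the path is still down: the uring socket's connect() to 10.0.0.2:4421 returns errno 111 (connection refused), controller re-initialization fails, the controller is marked as failed, and bdev_nvme schedules another reset; the retry about ten seconds later (18:40:19, just below) succeeds once the listener is reachable again. One way to watch this from the initiator side, assuming the standard bdev_nvme_get_controllers RPC and the bdevperf RPC socket used later in this run (adjust -s for the application you are actually inspecting), is a simple polling loop:

# hypothetical monitoring loop, not part of the test scripts
while sleep 1; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
done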
00:19:16.847 [2024-05-16 18:40:09.197363] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:16.847 [2024-05-16 18:40:19.257326] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:16.847 Received shutdown signal, test time was about 55.634919 seconds 00:19:16.847 00:19:16.847 Latency(us) 00:19:16.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.847 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:16.847 Verification LBA range: start 0x0 length 0x4000 00:19:16.847 Nvme0n1 : 55.63 7069.48 27.62 0.00 0.00 18072.45 1206.46 7046430.72 00:19:16.847 =================================================================================================================== 00:19:16.847 Total : 7069.48 27.62 0.00 0.00 18072.45 1206.46 7046430.72 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.847 rmmod nvme_tcp 00:19:16.847 rmmod nvme_fabrics 00:19:16.847 rmmod nvme_keyring 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 80975 ']' 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 80975 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 80975 ']' 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 80975 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80975 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80975' 00:19:16.847 killing process with pid 80975 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- 
# kill 80975 00:19:16.847 [2024-05-16 18:40:29.961362] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:16.847 18:40:29 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 80975 00:19:16.847 18:40:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:16.847 18:40:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:16.847 18:40:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:16.847 18:40:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.847 18:40:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.847 18:40:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.847 18:40:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.847 18:40:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.107 18:40:30 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:17.107 00:19:17.107 real 1m1.765s 00:19:17.107 user 2m51.272s 00:19:17.107 sys 0m18.650s 00:19:17.107 18:40:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:17.107 18:40:30 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:17.107 ************************************ 00:19:17.107 END TEST nvmf_host_multipath 00:19:17.107 ************************************ 00:19:17.107 18:40:30 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:17.107 18:40:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:17.107 18:40:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:17.107 18:40:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:17.107 ************************************ 00:19:17.107 START TEST nvmf_timeout 00:19:17.107 ************************************ 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:17.107 * Looking for test storage... 
00:19:17.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.107 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.108 
18:40:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.108 18:40:30 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:17.108 Cannot find device "nvmf_tgt_br" 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:17.108 Cannot find device "nvmf_tgt_br2" 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:17.108 Cannot find device "nvmf_tgt_br" 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:17.108 Cannot find device "nvmf_tgt_br2" 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:17.108 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:17.368 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:17.368 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:17.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:17.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:19:17.368 00:19:17.368 --- 10.0.0.2 ping statistics --- 00:19:17.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.368 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:17.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:17.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:19:17.368 00:19:17.368 --- 10.0.0.3 ping statistics --- 00:19:17.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.368 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:17.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:17.368 00:19:17.368 --- 10.0.0.1 ping statistics --- 00:19:17.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.368 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82146 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82146 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 82146 ']' 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:17.368 18:40:30 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:17.627 [2024-05-16 18:40:30.917286] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
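The nvmftestinit/nvmf_veth_init sequence traced above builds the virtual topology that every 10.0.0.x address in this log refers to: a network namespace nvmf_tgt_ns_spdk holding the two target interfaces (10.0.0.2 and 10.0.0.3), an initiator veth at 10.0.0.1 in the root namespace, all of them bridged over nvmf_br, plus an iptables rule admitting TCP port 4420. Condensed to its essentials (a sketch of the commands already shown above, not a substitute for nvmf/common.sh):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # a reply through the bridge, as in the output above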
00:19:17.627 [2024-05-16 18:40:30.917394] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.627 [2024-05-16 18:40:31.059444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:17.885 [2024-05-16 18:40:31.152469] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.885 [2024-05-16 18:40:31.152539] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.885 [2024-05-16 18:40:31.152552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.885 [2024-05-16 18:40:31.152563] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.885 [2024-05-16 18:40:31.152572] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.885 [2024-05-16 18:40:31.152887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.885 [2024-05-16 18:40:31.152894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.885 [2024-05-16 18:40:31.212063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:18.452 18:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:18.452 18:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:19:18.452 18:40:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:18.452 18:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:18.452 18:40:31 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:18.452 18:40:31 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.452 18:40:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.452 18:40:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:18.711 [2024-05-16 18:40:32.165238] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.711 18:40:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:19.279 Malloc0 00:19:19.279 18:40:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:19.537 18:40:32 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.537 18:40:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.796 [2024-05-16 18:40:33.245020] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:19.796 [2024-05-16 18:40:33.245427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
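With nvmf_tgt up inside the namespace and listening on /var/tmp/spdk.sock, the target side of the timeout test is built entirely through rpc.py, exactly the calls traced above, pulled together here for readability (the Malloc0 parameters match MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 set earlier):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420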
00:19:19.796 18:40:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82195 00:19:19.796 18:40:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:19.796 18:40:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82195 /var/tmp/bdevperf.sock 00:19:19.796 18:40:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 82195 ']' 00:19:19.796 18:40:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.796 18:40:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:19.796 18:40:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.796 18:40:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:19.796 18:40:33 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:20.055 [2024-05-16 18:40:33.310448] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:20.055 [2024-05-16 18:40:33.310531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82195 ] 00:19:20.055 [2024-05-16 18:40:33.450009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.055 [2024-05-16 18:40:33.552229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.313 [2024-05-16 18:40:33.614208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:20.880 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:20.880 18:40:34 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:19:20.880 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:21.138 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:21.398 NVMe0n1 00:19:21.398 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82219 00:19:21.398 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:21.398 18:40:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:21.398 Running I/O for 10 seconds... 
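The initiator half mirrors this: bdevperf is started with -z so it idles waiting for RPCs on its own socket, bdev_nvme_set_options is called with -r -1 (unlimited retries at the bdev layer), and the controller is attached with the two knobs this test exercises, --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, before perform_tests launches the 10-second verify workload at queue depth 128 with 4096-byte I/O. Condensed from the trace above (the waitforlisten step between launching bdevperf and the first RPC is omitted):

spdk=/home/vagrant/spdk_repo/spdk
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &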
00:19:22.335 18:40:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.598 [2024-05-16 18:40:36.045478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045599] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045643] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045658] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045685] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045706] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045713] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045719] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045726] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045733] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045739] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045767] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045774] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045782] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045822] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045847] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045872] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045880] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045888] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the 
state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045911] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045919] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045941] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045948] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.045994] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046048] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046078] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046085] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046093] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046108] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046115] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046137] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046145] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046152] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.598 [2024-05-16 18:40:36.046159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046173] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046188] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046195] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046249] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046256] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 
18:40:36.046263] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046288] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046309] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046316] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046330] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046356] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046371] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046436] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046451] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same 
with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046465] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c54f0 is same with the state(5) to be set 00:19:22.599 [2024-05-16 18:40:36.046555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 
18:40:36.046809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.046981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.046990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047097] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.599 [2024-05-16 18:40:36.047252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.599 [2024-05-16 18:40:36.047265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:66 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64088 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:22.600 [2024-05-16 18:40:36.047828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.047988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.047996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048080] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.600 [2024-05-16 18:40:36.048266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.600 [2024-05-16 18:40:36.048277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048575] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.048987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.048995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:22.601 [2024-05-16 18:40:36.049102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.601 [2024-05-16 18:40:36.049255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.601 [2024-05-16 18:40:36.049265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.602 [2024-05-16 18:40:36.049289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.602 [2024-05-16 18:40:36.049324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.602 [2024-05-16 18:40:36.049360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 
18:40:36.049371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.602 [2024-05-16 18:40:36.049380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:22.602 [2024-05-16 18:40:36.049729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.602 [2024-05-16 18:40:36.049756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049767] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c97f0 is same with the state(5) to be set 00:19:22.602 [2024-05-16 18:40:36.049779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:22.602 [2024-05-16 18:40:36.049787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:22.602 [2024-05-16 18:40:36.049801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:19:22.602 [2024-05-16 18:40:36.049810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.602 [2024-05-16 18:40:36.049897] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18c97f0 was disconnected and freed. reset controller. 
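The block above is the expected signature of a forced disconnect: once the target deletes the submission queue, every READ and WRITE still queued on qpair 0x18c97f0 completes with ABORTED - SQ DELETION before the qpair is freed and the controller reset proceeds. To tally how much I/O was cut short, something like the following can be run against a saved copy of this console log (the file name timeout.log is hypothetical):

  # Total aborted completions, then a READ/WRITE breakdown of the aborted commands.
  grep -o 'ABORTED - SQ DELETION' timeout.log | wc -l
  grep -oE '(READ|WRITE) sqid:1 cid:[0-9]+' timeout.log | awk '{print $1}' | sort | uniq -c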
00:19:22.602 [2024-05-16 18:40:36.050179] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.602 [2024-05-16 18:40:36.050271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185a6a0 (9): Bad file descriptor 00:19:22.602 [2024-05-16 18:40:36.050383] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.602 [2024-05-16 18:40:36.050404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185a6a0 with addr=10.0.0.2, port=4420 00:19:22.602 [2024-05-16 18:40:36.050414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185a6a0 is same with the state(5) to be set 00:19:22.602 [2024-05-16 18:40:36.050433] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185a6a0 (9): Bad file descriptor 00:19:22.602 [2024-05-16 18:40:36.050449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:22.602 [2024-05-16 18:40:36.050459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:22.602 [2024-05-16 18:40:36.050470] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:22.602 [2024-05-16 18:40:36.050490] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:22.602 [2024-05-16 18:40:36.050501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.602 18:40:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:25.157 [2024-05-16 18:40:38.050720] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.157 [2024-05-16 18:40:38.050789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185a6a0 with addr=10.0.0.2, port=4420 00:19:25.157 [2024-05-16 18:40:38.050805] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185a6a0 is same with the state(5) to be set 00:19:25.157 [2024-05-16 18:40:38.050833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185a6a0 (9): Bad file descriptor 00:19:25.157 [2024-05-16 18:40:38.050876] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:25.157 [2024-05-16 18:40:38.050888] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:25.157 [2024-05-16 18:40:38.050899] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:25.157 [2024-05-16 18:40:38.050928] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:25.157 [2024-05-16 18:40:38.050939] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.157 18:40:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:25.157 18:40:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:25.157 18:40:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:25.157 18:40:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:25.157 18:40:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:25.157 18:40:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:25.157 18:40:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:25.157 18:40:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:25.157 18:40:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:27.060 [2024-05-16 18:40:40.051178] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:27.060 [2024-05-16 18:40:40.051290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185a6a0 with addr=10.0.0.2, port=4420 00:19:27.060 [2024-05-16 18:40:40.051307] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185a6a0 is same with the state(5) to be set 00:19:27.060 [2024-05-16 18:40:40.051336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185a6a0 (9): Bad file descriptor 00:19:27.060 [2024-05-16 18:40:40.051357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:27.060 [2024-05-16 18:40:40.051368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:27.060 [2024-05-16 18:40:40.051380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:27.060 [2024-05-16 18:40:40.051410] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:27.060 [2024-05-16 18:40:40.051423] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.961 [2024-05-16 18:40:42.051571] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
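The checks at host/timeout.sh@57 and @58 above still pass because the controller is only disconnected at this point: NVMe0 remains registered and NVMe0n1 still exists while bdev_nvme keeps retrying the reset (the attempts at 18:40:36, :38, :40 and :42 above). Based only on the rpc.py and jq invocations visible in the trace, the two helpers reduce to roughly the sketch below; the standalone-script form and variable names are assumptions, not the verbatim timeout.sh code:

  #!/usr/bin/env bash
  # Minimal sketch of get_controller/get_bdev as traced above; assumes the
  # bdevperf RPC server is listening on /var/tmp/bdevperf.sock.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock

  get_controller() {
      # bdev_nvme_get_controllers returns a JSON array of attached controllers.
      "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
  }

  get_bdev() {
      "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
  }

  # While the controller is merely disconnected (loss timer not yet expired),
  # both names are still expected to be present.
  [[ $(get_controller) == "NVMe0" ]]
  [[ $(get_bdev) == "NVMe0n1" ]]

Further down, after the controller-loss timeout has expired, the same helpers return empty strings, which is what the [[ '' == '' ]] checks at @62 and @63 verify.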
00:19:29.895
00:19:29.895 Latency(us)
00:19:29.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:29.895 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:29.895 Verification LBA range: start 0x0 length 0x4000
00:19:29.895 NVMe0n1 : 8.17 975.61 3.81 15.66 0.00 128992.01 4587.52 7046430.72
00:19:29.895 ===================================================================================================================
00:19:29.895 Total : 975.61 3.81 15.66 0.00 128992.01 4587.52 7046430.72
00:19:29.895 0
00:19:30.153 18:40:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:19:30.153 18:40:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:30.153 18:40:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:30.412 18:40:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:30.412 18:40:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:19:30.412 18:40:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:30.412 18:40:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82219
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82195
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 82195 ']'
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 82195
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82195
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:19:30.671 killing process with pid 82195
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82195'
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 82195
00:19:30.671 Received shutdown signal, test time was about 9.220784 seconds
00:19:30.671
00:19:30.671 Latency(us)
00:19:30.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.671 ===================================================================================================================
00:19:30.671 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:30.671 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 82195
00:19:30.930 18:40:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:31.189 [2024-05-16 18:40:44.687457] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:31.513 18:40:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82335
00:19:31.513 18:40:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- #
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:31.513 18:40:44 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82335 /var/tmp/bdevperf.sock 00:19:31.513 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 82335 ']' 00:19:31.513 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:31.513 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:31.513 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:31.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:31.513 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:31.513 18:40:44 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:31.513 [2024-05-16 18:40:44.768559] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:19:31.513 [2024-05-16 18:40:44.768701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82335 ] 00:19:31.513 [2024-05-16 18:40:44.910409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.787 [2024-05-16 18:40:45.055163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.787 [2024-05-16 18:40:45.132639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:32.354 18:40:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:32.354 18:40:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:19:32.354 18:40:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:32.612 18:40:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:32.870 NVMe0n1 00:19:32.870 18:40:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82365 00:19:32.870 18:40:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.870 18:40:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:32.870 Running I/O for 10 seconds... 
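This restart wires together the knobs the remainder of the test exercises: bdevperf is started suspended (-z) on its own RPC socket, bdev_nvme_set_options is called with -r -1, and the controller is attached with a 1 s reconnect delay, a 2 s fast-I/O-fail timeout and a 5 s controller-loss timeout. Collapsed into one place, the traced commands look roughly like the sketch below (not the verbatim timeout.sh code; waitforlisten and cleanup are omitted):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # Start bdevperf suspended (-z) so it can be configured over the RPC socket first.
  "$spdk/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
  bdevperf_pid=$!

  # Adjust retry behaviour, then attach the controller with the timers under test.
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_set_options -r -1
  "$spdk/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

  # Kick off the 10 s verify workload over the same RPC socket.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests &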
00:19:33.806 18:40:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:34.066 [2024-05-16 18:40:47.477923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.066 [2024-05-16 18:40:47.478194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.066 [2024-05-16 18:40:47.478334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.066 [2024-05-16 18:40:47.478469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.066 [2024-05-16 18:40:47.478593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.066 [2024-05-16 18:40:47.478649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.066 [2024-05-16 18:40:47.478789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:34.067 [2024-05-16 18:40:47.478871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.478923] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc6a0 is same with the state(5) to be set 00:19:34.067 [2024-05-16 18:40:47.479354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.067 [2024-05-16 18:40:47.479501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.479642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.479786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.479953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480415] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480622] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.480805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.480814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67040 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 
[2024-05-16 18:40:47.481539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481741] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.067 [2024-05-16 18:40:47.481878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.067 [2024-05-16 18:40:47.481889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.481899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.481910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.481920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.481931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.481940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.481968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.481977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.481989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.481998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:34.068 [2024-05-16 18:40:47.482288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482522] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.482969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.482995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67608 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.483017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.483039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.483060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.483081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.483102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.483123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.483145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.483167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.068 [2024-05-16 18:40:47.483213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.068 [2024-05-16 18:40:47.483242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.069 [2024-05-16 18:40:47.483252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:34.069 [2024-05-16 18:40:47.483275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.069 [2024-05-16 18:40:47.483297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.069 [2024-05-16 18:40:47.483320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.069 [2024-05-16 18:40:47.483343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.069 [2024-05-16 18:40:47.483381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483542] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:34.069 [2024-05-16 18:40:47.483743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.069 [2024-05-16 18:40:47.483763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b910 is same with the state(5) to be set 00:19:34.069 [2024-05-16 18:40:47.483788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:34.069 [2024-05-16 18:40:47.483797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:34.069 [2024-05-16 18:40:47.483810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67736 len:8 PRP1 0x0 PRP2 0x0 00:19:34.069 [2024-05-16 18:40:47.483819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.069 [2024-05-16 18:40:47.483884] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c3b910 was disconnected and freed. reset controller. 00:19:34.069 [2024-05-16 18:40:47.484154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:34.069 [2024-05-16 18:40:47.484180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcc6a0 (9): Bad file descriptor 00:19:34.069 [2024-05-16 18:40:47.484284] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:34.069 [2024-05-16 18:40:47.484323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bcc6a0 with addr=10.0.0.2, port=4420 00:19:34.069 [2024-05-16 18:40:47.484334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc6a0 is same with the state(5) to be set 00:19:34.069 [2024-05-16 18:40:47.484362] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcc6a0 (9): Bad file descriptor 00:19:34.069 [2024-05-16 18:40:47.484378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:34.069 [2024-05-16 18:40:47.484388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:34.069 [2024-05-16 18:40:47.484399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:34.069 [2024-05-16 18:40:47.484418] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:34.069 [2024-05-16 18:40:47.484429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:34.069 18:40:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:35.003 [2024-05-16 18:40:48.484626] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.003 [2024-05-16 18:40:48.484954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bcc6a0 with addr=10.0.0.2, port=4420 00:19:35.003 [2024-05-16 18:40:48.485118] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc6a0 is same with the state(5) to be set 00:19:35.003 [2024-05-16 18:40:48.485289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcc6a0 (9): Bad file descriptor 00:19:35.003 [2024-05-16 18:40:48.485456] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.003 [2024-05-16 18:40:48.485591] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:35.003 [2024-05-16 18:40:48.485718] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.003 [2024-05-16 18:40:48.485789] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:35.003 [2024-05-16 18:40:48.485907] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.261 18:40:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.261 [2024-05-16 18:40:48.752185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.520 18:40:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82365 00:19:36.087 [2024-05-16 18:40:49.504072] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:44.221 00:19:44.221 Latency(us) 00:19:44.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.221 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:44.221 Verification LBA range: start 0x0 length 0x4000 00:19:44.221 NVMe0n1 : 10.01 6577.71 25.69 0.00 0.00 19413.45 1623.51 3019898.88 00:19:44.221 =================================================================================================================== 00:19:44.221 Total : 6577.71 25.69 0.00 0.00 19413.45 1623.51 3019898.88 00:19:44.221 0 00:19:44.221 18:40:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82471 00:19:44.221 18:40:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.221 18:40:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:44.221 Running I/O for 10 seconds... 
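The stretch above covers host/timeout.sh steps @90 through @97: the TCP listener for cnode1 is re-added, the stalled controller reset finally succeeds, step @92 waits for the backgrounded pid 82365, and the first bdevperf pass reports 6577.71 IOPS (25.69 MiB/s) over a 10.01 s runtime with an average completion latency of 19413.45 us (min 1623.51 us, max 3019898.88 us) and no failed or timed-out I/O. Step @96 then starts a second pass against the same long-running bdevperf process. A minimal sketch of the two calls visible in the trace, assuming bdevperf is already up and serving RPCs on the /var/tmp/bdevperf.sock path shown above, and assuming the script stores the background pid with rpc_pid=$! (the xtrace at @97 prints it already expanded to 82471):

  # restore the TCP listener so the initiator can reconnect (host/timeout.sh @91)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # start another verify round on the already-running bdevperf instance (host/timeout.sh @96)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  rpc_pid=$!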
00:19:44.221 18:40:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.221 [2024-05-16 18:40:57.648532] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c47f0 is same with the state(5) to be set 00:19:44.221 [2024-05-16 18:40:57.648607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c47f0 is same with the state(5) to be set 00:19:44.221 [2024-05-16 18:40:57.648623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c47f0 is same with the state(5) to be set 00:19:44.221 [2024-05-16 18:40:57.648634] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c47f0 is same with the state(5) to be set 00:19:44.221 [2024-05-16 18:40:57.648646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c47f0 is same with the state(5) to be set 00:19:44.221 [2024-05-16 18:40:57.648968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:114 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.221 [2024-05-16 18:40:57.649525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69872 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.221 [2024-05-16 18:40:57.649667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.221 [2024-05-16 18:40:57.649677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 
[2024-05-16 18:40:57.649888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.649987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.649997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.222 [2024-05-16 18:40:57.650083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.222 [2024-05-16 18:40:57.650104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.222 [2024-05-16 18:40:57.650126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.222 [2024-05-16 18:40:57.650149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.222 [2024-05-16 18:40:57.650171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.222 [2024-05-16 18:40:57.650193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.222 [2024-05-16 18:40:57.650216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.222 [2024-05-16 18:40:57.650237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.222 [2024-05-16 18:40:57.650502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.222 [2024-05-16 18:40:57.650514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.650611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.650632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.650654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.650675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.650697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.650719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.650741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.650763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 
[2024-05-16 18:40:57.650774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.650984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.650995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:70296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:44.223 [2024-05-16 18:40:57.651243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:111 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.651265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.651288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.651311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.651333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.651354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.651376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.651398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.651420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.223 [2024-05-16 18:40:57.651442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.223 [2024-05-16 18:40:57.651453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.224 [2024-05-16 18:40:57.651464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69720 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.224 [2024-05-16 18:40:57.651485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.224 [2024-05-16 18:40:57.651507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.224 [2024-05-16 18:40:57.651529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.224 [2024-05-16 18:40:57.651552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:44.224 [2024-05-16 18:40:57.651576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c1b900 is same with the state(5) to be set 00:19:44.224 [2024-05-16 18:40:57.651600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69760 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651638] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70312 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70320 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 
18:40:57.651709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70328 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70336 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651779] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70344 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70352 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70360 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651906] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70368 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651933] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70376 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.651968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.651975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.651983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70384 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.651992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.652002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.652010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.652018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70392 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.652027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.652036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.652043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.652051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70400 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.652060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.652069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.652076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.652083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70408 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.652092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.652101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:44.224 [2024-05-16 18:40:57.652109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:44.224 [2024-05-16 18:40:57.652118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70416 len:8 PRP1 0x0 PRP2 0x0 00:19:44.224 [2024-05-16 18:40:57.652127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.224 [2024-05-16 18:40:57.652193] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
00:19:44.225 [2024-05-16 18:40:57.652454] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:44.225 [2024-05-16 18:40:57.652548] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcc6a0 (9): Bad file descriptor
00:19:44.225 [2024-05-16 18:40:57.652669] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:44.225 [2024-05-16 18:40:57.652692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bcc6a0 with addr=10.0.0.2, port=4420
00:19:44.225 [2024-05-16 18:40:57.652704] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc6a0 is same with the state(5) to be set
00:19:44.225 [2024-05-16 18:40:57.652723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcc6a0 (9): Bad file descriptor
00:19:44.225 [2024-05-16 18:40:57.652752] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:44.225 [2024-05-16 18:40:57.652763] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:44.225 [2024-05-16 18:40:57.652774] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:44.225 [2024-05-16 18:40:57.652795] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:44.225 [2024-05-16 18:40:57.652808] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:44.225 18:40:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
[... the same connect() failed (errno = 111) / sock connection error / Ctrlr is in error state / controller reinitialization failed / Resetting controller failed / resetting controller sequence repeats for the reconnect attempts against tqpair=0x1bcc6a0 (10.0.0.2 port 4420) at 18:40:58, 18:40:59 and 18:41:00 ...]
00:19:47.473 18:41:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:47.473 [2024-05-16 18:41:00.897266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:47.473 18:41:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82471
00:19:48.409 [2024-05-16 18:41:01.696048] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
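The block above shows one of the timeout scenarios driven by host/timeout.sh: with the target's TCP listener gone, bdev_nvme keeps resetting and reconnecting (the connect() failed, errno = 111 attempts), and the reset only completes once the listener is re-added. A minimal sketch of that listener toggle, using only the rpc.py calls that appear in this log and assuming the same repo path, subsystem NQN and address; host/timeout.sh wraps these calls in its own helpers:

# Sketch only: drop and restore the NVMe/TCP listener the initiator depends on.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # initiator starts its reset/reconnect loop
sleep 3                                                                 # a few reconnect attempts fail with errno = 111
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # next reconnect succeeds ("Resetting controller successful")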
00:19:53.715
00:19:53.715 Latency(us)
00:19:53.715 Device Information          : runtime(s)      IOPS     MiB/s    Fail/s    TO/s    Average       min          max
00:19:53.715 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:53.715 Verification LBA range: start 0x0 length 0x4000
00:19:53.715 NVMe0n1                      :     10.01    6108.08    23.86   3986.46    0.00   12653.18    647.91   3019898.88
00:19:53.715 ===================================================================================================================
00:19:53.715 Total                        :              6108.08    23.86   3986.46    0.00   12653.18      0.00   3019898.88
00:19:53.715 0
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82335
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 82335 ']'
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 82335
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82335
00:19:53.715 killing process with pid 82335
00:19:53.715 Received shutdown signal, test time was about 10.000000 seconds
00:19:53.715
00:19:53.715 Latency(us)
00:19:53.715 Device Information          : runtime(s)      IOPS     MiB/s    Fail/s    TO/s    Average       min          max
00:19:53.715 ===================================================================================================================
00:19:53.715 Total                        :                 0.00     0.00      0.00    0.00       0.00      0.00         0.00
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82335'
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 82335
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 82335
00:19:53.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82580
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82580 /var/tmp/bdevperf.sock
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 82580 ']'
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable
00:19:53.715 18:41:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
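As a quick consistency check on the bdevperf summary table above: throughput is completed IOPS times the 4096-byte I/O size, and the non-zero Fail/s column is consistent with the I/O aborted during the induced controller resets. Assuming nothing beyond bc on the build host:

echo "scale=4; 6108.08 * 4096 / 1048576" | bc   # 23.8596 -> the 23.86 MiB/s reported for NVMe0n1, after rounding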
00:19:53.715 [2024-05-16 18:41:06.886794] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization...
00:19:53.715 [2024-05-16 18:41:06.887115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82580 ]
00:19:53.715 [2024-05-16 18:41:07.021277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:53.715 [2024-05-16 18:41:07.149369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:53.974 [2024-05-16 18:41:07.220720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:19:54.542 18:41:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:19:54.542 18:41:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0
00:19:54.542 18:41:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82596
00:19:54.542 18:41:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82580 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
18:41:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:19:54.801 18:41:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:19:55.060 NVMe0n1
00:19:55.060 18:41:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82642
00:19:55.060 18:41:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:55.060 18:41:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:19:55.060 Running I/O for 10 seconds...
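The sequence just above wires up the freshly started bdevperf over its private RPC socket before the 10-second run: bpftrace attaches nvmf_timeout.bt to the bdevperf pid, bdev_nvme options are set, and the controller is attached with an explicit reconnect policy (--reconnect-delay-sec 2, --ctrlr-loss-timeout-sec 5) before perform_tests is kicked off. A condensed sketch of the same sequence, assuming the bdevperf started above is already listening on /var/tmp/bdevperf.sock; comments describe the flags only as they are used in this run:

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/bdevperf.sock

$SPDK/scripts/bpftrace.sh 82580 $SPDK/scripts/bpf/nvmf_timeout.bt &          # trace the bdevperf pid (82580 in this run)
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9               # bdev_nvme options the test applies before attaching
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2                       # retry every 2 s, give up on the controller after 5 s of loss
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests &            # start the queued job: -q 128, 4 KiB randread, 10 s
sleep 1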
00:19:55.996 18:41:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:56.260 [2024-05-16 18:41:09.674594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19d0110 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state message for tqpair=0x19d0110 repeats continuously, timestamps 18:41:09.674660 through 18:41:09.675843 ...]
00:19:56.261 [2024-05-16 18:41:09.675928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.261 [2024-05-16 18:41:09.675988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.261 [2024-05-16 18:41:09.676014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.261 [2024-05-16 18:41:09.676025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:56.261 [2024-05-16 18:41:09.676037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.261 [2024-05-16 18:41:09.676048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ print_command / ABORTED - SQ DELETION completion pair repeats for the remaining queued READs (cids 5-62, 1 and 66-85, various LBAs), timestamps 18:41:09.676059 through 18:41:09.677863 ...]
00:19:56.263 [2024-05-16 18:41:09.677873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:50552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:56.263 [2024-05-16 18:41:09.677882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.263 [2024-05-16 18:41:09.677920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.263 [2024-05-16 18:41:09.677929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.263 [2024-05-16 18:41:09.677939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.263 [2024-05-16 18:41:09.677948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.263 [2024-05-16 18:41:09.677958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.263 [2024-05-16 18:41:09.677966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.263 [2024-05-16 18:41:09.677976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.263 [2024-05-16 18:41:09.677985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.263 [2024-05-16 18:41:09.677994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.263 [2024-05-16 18:41:09.678003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 
[2024-05-16 18:41:09.678115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:111160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678363] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:73760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.264 [2024-05-16 18:41:09.678769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.264 [2024-05-16 18:41:09.678779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.265 [2024-05-16 18:41:09.678788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.265 [2024-05-16 18:41:09.678799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.265 [2024-05-16 18:41:09.678807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.265 [2024-05-16 18:41:09.678818] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.265 [2024-05-16 18:41:09.678827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.265 [2024-05-16 18:41:09.678837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.265 [2024-05-16 18:41:09.678845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.265 [2024-05-16 18:41:09.678856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.265 [2024-05-16 18:41:09.678865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.265 [2024-05-16 18:41:09.678875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:111360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:56.265 [2024-05-16 18:41:09.678883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.265 [2024-05-16 18:41:09.678893] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a9640 is same with the state(5) to be set 00:19:56.265 [2024-05-16 18:41:09.678920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:56.265 [2024-05-16 18:41:09.678928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:56.265 [2024-05-16 18:41:09.678936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114360 len:8 PRP1 0x0 PRP2 0x0 00:19:56.265 [2024-05-16 18:41:09.678945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.265 [2024-05-16 18:41:09.679008] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21a9640 was disconnected and freed. reset controller. 
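The block above is the expected fallout of the forced reset in this timeout test: once bdev_nvme disconnects I/O qpair 1, every command still queued on it is completed manually with the generic ABORTED - SQ DELETION (00/08) status, so the same completion line repeats once per outstanding READ before qpair 0x21a9640 is freed. A quick way to sanity-check that flood offline is to count the aborted completions in a saved copy of this console output (illustrative sketch only; the file name timeout_console.log is an assumption, the test itself does not write such a file):

  # count how many queued commands were completed with ABORTED - SQ DELETION
  grep -c 'ABORTED - SQ DELETION' timeout_console.log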
00:19:56.265 [2024-05-16 18:41:09.679324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.265 [2024-05-16 18:41:09.679418] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213b820 (9): Bad file descriptor 00:19:56.265 [2024-05-16 18:41:09.679537] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.265 [2024-05-16 18:41:09.679558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213b820 with addr=10.0.0.2, port=4420 00:19:56.265 [2024-05-16 18:41:09.679584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b820 is same with the state(5) to be set 00:19:56.265 [2024-05-16 18:41:09.679618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213b820 (9): Bad file descriptor 00:19:56.265 [2024-05-16 18:41:09.679633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.265 [2024-05-16 18:41:09.679642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:56.265 [2024-05-16 18:41:09.679653] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:56.265 [2024-05-16 18:41:09.679672] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.265 [2024-05-16 18:41:09.679682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.265 18:41:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82642 00:19:58.798 [2024-05-16 18:41:11.679936] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:58.798 [2024-05-16 18:41:11.680004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213b820 with addr=10.0.0.2, port=4420 00:19:58.798 [2024-05-16 18:41:11.680021] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b820 is same with the state(5) to be set 00:19:58.798 [2024-05-16 18:41:11.680048] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213b820 (9): Bad file descriptor 00:19:58.798 [2024-05-16 18:41:11.680079] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.798 [2024-05-16 18:41:11.680090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:58.799 [2024-05-16 18:41:11.680101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:58.799 [2024-05-16 18:41:11.680129] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
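The connect() failures above (errno 111, ECONNREFUSED, against 10.0.0.2 port 4420) drive the delayed reconnect attempts that the attached probes later record as "reconnect delay bdev controller NVMe0": each retry at roughly two-second intervals fails, the controller stays in the failed state, and bdev_nvme schedules the next reset. The pass/fail logic that host/timeout.sh applies to the captured trace further down amounts to a check like the following (a hedged sketch reconstructed from the grep -c and arithmetic test visible in the trace, not the literal script source):

  # require more than two delayed reconnects in the captured trace
  reconnects=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
  if (( reconnects <= 2 )); then
      echo "expected more than 2 delayed reconnects, got $reconnects" >&2
      return 1
  fi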
00:19:58.799 [2024-05-16 18:41:11.680140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:00.705 [2024-05-16 18:41:13.680446] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.705 [2024-05-16 18:41:13.680505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213b820 with addr=10.0.0.2, port=4420 00:20:00.705 [2024-05-16 18:41:13.680521] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213b820 is same with the state(5) to be set 00:20:00.705 [2024-05-16 18:41:13.680564] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213b820 (9): Bad file descriptor 00:20:00.705 [2024-05-16 18:41:13.680584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:00.705 [2024-05-16 18:41:13.680594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:00.705 [2024-05-16 18:41:13.680605] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:00.705 [2024-05-16 18:41:13.680634] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:00.705 [2024-05-16 18:41:13.680645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:02.609 [2024-05-16 18:41:15.680863] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:03.548 00:20:03.548 Latency(us) 00:20:03.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.548 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:03.548 NVMe0n1 : 8.15 2185.78 8.54 15.71 0.00 58036.42 1452.22 7015926.69 00:20:03.548 =================================================================================================================== 00:20:03.548 Total : 2185.78 8.54 15.71 0.00 58036.42 1452.22 7015926.69 00:20:03.548 0 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:03.548 Attaching 5 probes... 
00:20:03.548 1294.691069: reset bdev controller NVMe0 00:20:03.548 1294.839547: reconnect bdev controller NVMe0 00:20:03.548 3295.157386: reconnect delay bdev controller NVMe0 00:20:03.548 3295.193804: reconnect bdev controller NVMe0 00:20:03.548 5295.662895: reconnect delay bdev controller NVMe0 00:20:03.548 5295.699538: reconnect bdev controller NVMe0 00:20:03.548 7296.176346: reconnect delay bdev controller NVMe0 00:20:03.548 7296.213000: reconnect bdev controller NVMe0 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82596 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82580 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 82580 ']' 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 82580 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82580 00:20:03.548 killing process with pid 82580 00:20:03.548 Received shutdown signal, test time was about 8.208606 seconds 00:20:03.548 00:20:03.548 Latency(us) 00:20:03.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.548 =================================================================================================================== 00:20:03.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82580' 00:20:03.548 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 82580 00:20:03.549 18:41:16 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 82580 00:20:03.549 18:41:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.116 rmmod nvme_tcp 00:20:04.116 rmmod nvme_fabrics 00:20:04.116 rmmod nvme_keyring 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # 
set -e 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82146 ']' 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82146 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 82146 ']' 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 82146 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82146 00:20:04.116 killing process with pid 82146 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82146' 00:20:04.116 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 82146 00:20:04.117 [2024-05-16 18:41:17.420228] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:04.117 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 82146 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:04.376 00:20:04.376 real 0m47.413s 00:20:04.376 user 2m18.735s 00:20:04.376 sys 0m5.920s 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:04.376 18:41:17 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:04.376 ************************************ 00:20:04.376 END TEST nvmf_timeout 00:20:04.376 ************************************ 00:20:04.376 18:41:17 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:20:04.376 18:41:17 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:20:04.376 18:41:17 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.376 18:41:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.635 18:41:17 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:20:04.635 00:20:04.635 real 12m28.424s 00:20:04.635 user 30m15.183s 00:20:04.635 sys 3m5.231s 00:20:04.635 18:41:17 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:04.635 ************************************ 00:20:04.635 END TEST nvmf_tcp 00:20:04.635 
************************************ 00:20:04.635 18:41:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:04.635 18:41:17 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:04.635 18:41:17 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:04.635 18:41:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:04.635 18:41:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:04.635 18:41:17 -- common/autotest_common.sh@10 -- # set +x 00:20:04.635 ************************************ 00:20:04.635 START TEST nvmf_dif 00:20:04.635 ************************************ 00:20:04.635 18:41:17 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:04.635 * Looking for test storage... 00:20:04.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:04.635 18:41:18 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.635 18:41:18 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.635 18:41:18 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.635 18:41:18 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.635 18:41:18 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.635 18:41:18 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.635 18:41:18 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.635 18:41:18 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:04.635 18:41:18 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.635 18:41:18 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:04.635 18:41:18 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:04.635 18:41:18 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:04.635 18:41:18 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:04.635 18:41:18 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.635 18:41:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:04.635 18:41:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:04.635 18:41:18 nvmf_dif -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:04.635 Cannot find device "nvmf_tgt_br" 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.635 Cannot find device "nvmf_tgt_br2" 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:04.635 Cannot find device "nvmf_tgt_br" 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:04.635 Cannot find device "nvmf_tgt_br2" 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:04.635 18:41:18 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.894 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@178 -- # ip addr 
add 10.0.0.1/24 dev nvmf_init_if 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:04.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:20:04.894 00:20:04.894 --- 10.0.0.2 ping statistics --- 00:20:04.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.894 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:04.894 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.894 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:20:04.894 00:20:04.894 --- 10.0.0.3 ping statistics --- 00:20:04.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.894 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:04.894 00:20:04.894 --- 10.0.0.1 ping statistics --- 00:20:04.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.894 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:04.894 18:41:18 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:05.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:05.413 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:05.413 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:05.413 18:41:18 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:05.413 18:41:18 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:05.413 18:41:18 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:05.413 18:41:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83076 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:05.413 18:41:18 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83076 00:20:05.413 18:41:18 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 83076 ']' 00:20:05.413 18:41:18 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.413 18:41:18 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:05.413 18:41:18 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.413 18:41:18 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:05.413 18:41:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:05.413 [2024-05-16 18:41:18.806901] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:20:05.413 [2024-05-16 18:41:18.807013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.672 [2024-05-16 18:41:18.947362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.672 [2024-05-16 18:41:19.040761] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
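The nvmf_tgt instance starting here runs inside the nvmf_tgt_ns_spdk network namespace that nvmf_veth_init just built, and the ping checks above (10.0.0.2 and 10.0.0.3 from the default namespace, 10.0.0.1 from inside the namespace) confirm the path works before the target comes up. Gathered in one place for readability, the traced ip and iptables calls amount to roughly the following (a trimmed sketch of the commands already shown above, with the second target interface and its bridge port left out):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT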
00:20:05.672 [2024-05-16 18:41:19.040856] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.672 [2024-05-16 18:41:19.040867] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.672 [2024-05-16 18:41:19.040875] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.672 [2024-05-16 18:41:19.040881] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.672 [2024-05-16 18:41:19.040910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.672 [2024-05-16 18:41:19.112762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:06.239 18:41:19 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:06.239 18:41:19 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:20:06.239 18:41:19 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:06.239 18:41:19 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.239 18:41:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:06.498 18:41:19 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.498 18:41:19 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:06.498 18:41:19 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:06.498 18:41:19 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.498 18:41:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:06.498 [2024-05-16 18:41:19.762596] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.498 18:41:19 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.498 18:41:19 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:06.498 18:41:19 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:06.498 18:41:19 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:06.498 18:41:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:06.498 ************************************ 00:20:06.498 START TEST fio_dif_1_default 00:20:06.498 ************************************ 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:06.498 bdev_null0 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:06.498 
18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:06.498 [2024-05-16 18:41:19.806558] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:06.498 [2024-05-16 18:41:19.806800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.498 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:06.499 { 00:20:06.499 "params": { 00:20:06.499 "name": "Nvme$subsystem", 00:20:06.499 "trtype": "$TEST_TRANSPORT", 00:20:06.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.499 "adrfam": "ipv4", 00:20:06.499 "trsvcid": "$NVMF_PORT", 00:20:06.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.499 "hdgst": ${hdgst:-false}, 00:20:06.499 "ddgst": ${ddgst:-false} 00:20:06.499 }, 00:20:06.499 "method": "bdev_nvme_attach_controller" 00:20:06.499 } 00:20:06.499 EOF 00:20:06.499 )") 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:06.499 
18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:06.499 "params": { 00:20:06.499 "name": "Nvme0", 00:20:06.499 "trtype": "tcp", 00:20:06.499 "traddr": "10.0.0.2", 00:20:06.499 "adrfam": "ipv4", 00:20:06.499 "trsvcid": "4420", 00:20:06.499 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:06.499 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:06.499 "hdgst": false, 00:20:06.499 "ddgst": false 00:20:06.499 }, 00:20:06.499 "method": "bdev_nvme_attach_controller" 00:20:06.499 }' 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:06.499 18:41:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:06.758 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:06.758 fio-3.35 00:20:06.758 Starting 1 thread 00:20:19.013 00:20:19.013 filename0: (groupid=0, jobs=1): err= 0: pid=83137: Thu May 16 18:41:30 2024 00:20:19.013 read: IOPS=9571, BW=37.4MiB/s (39.2MB/s)(374MiB/10001msec) 00:20:19.013 slat (nsec): min=5875, max=70530, avg=7823.16, 
stdev=3438.19 00:20:19.013 clat (usec): min=314, max=3913, avg=394.81, stdev=48.23 00:20:19.013 lat (usec): min=320, max=3941, avg=402.63, stdev=49.02 00:20:19.013 clat percentiles (usec): 00:20:19.013 | 1.00th=[ 322], 5.00th=[ 338], 10.00th=[ 347], 20.00th=[ 359], 00:20:19.013 | 30.00th=[ 371], 40.00th=[ 379], 50.00th=[ 388], 60.00th=[ 400], 00:20:19.013 | 70.00th=[ 412], 80.00th=[ 424], 90.00th=[ 449], 95.00th=[ 474], 00:20:19.013 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 562], 99.95th=[ 611], 00:20:19.013 | 99.99th=[ 906] 00:20:19.013 bw ( KiB/s): min=36096, max=39936, per=100.00%, avg=38297.26, stdev=901.67, samples=19 00:20:19.013 iops : min= 9024, max= 9984, avg=9574.32, stdev=225.42, samples=19 00:20:19.013 lat (usec) : 500=98.34%, 750=1.63%, 1000=0.02% 00:20:19.013 lat (msec) : 4=0.01% 00:20:19.013 cpu : usr=84.95%, sys=13.25%, ctx=27, majf=0, minf=0 00:20:19.013 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.013 issued rwts: total=95720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.013 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:19.013 00:20:19.013 Run status group 0 (all jobs): 00:20:19.013 READ: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=374MiB (392MB), run=10001-10001msec 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.013 00:20:19.013 real 0m11.074s 00:20:19.013 user 0m9.183s 00:20:19.013 sys 0m1.621s 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:19.013 18:41:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:19.013 ************************************ 00:20:19.013 END TEST fio_dif_1_default 00:20:19.013 ************************************ 00:20:19.013 18:41:30 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:19.013 18:41:30 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:19.013 18:41:30 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:19.014 18:41:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 
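For reference, the fio_dif_1_default run that finishes above drives a single spdk_bdev target with 4 KiB random reads at iodepth 4 for roughly 10 seconds. The actual job file is generated on the fly by gen_fio_conf in target/dif.sh and is never echoed into the log, so the following is only a minimal sketch of an equivalent standalone job, assuming the controller attached as "Nvme0" exposes a bdev named Nvme0n1 and that the run is time based:

# Hypothetical reconstruction; the real job file comes from gen_fio_conf and is not shown in this log.
cat > /tmp/dif_default.fio <<'EOF'
[global]
ioengine=spdk_bdev
; the SPDK fio plugin requires thread mode
thread=1
; parameters below match the filename0 banner: rw=randread, bs=4096B, iodepth=4, run=10001msec
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10

[filename0]
# Assumed bdev name for the controller attached as "Nvme0" in the JSON config printed above.
filename=Nvme0n1
EOF

# Invocation mirrors the log's fio_bdev wrapper: the plugin is preloaded and the bdev layer
# is configured from JSON; /tmp/bdev_nvme.json stands in for the /dev/fd/62 descriptor that
# carries the bdev_nvme_attach_controller config in the log.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev_nvme.json /tmp/dif_default.fio

The multi-subsystems and rand_params runs that follow differ only in the number of [filenameN] sections and in bs/iodepth/numjobs, which the job banners in the log record.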
************************************ 00:20:19.014 START TEST fio_dif_1_multi_subsystems 00:20:19.014 ************************************ 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 bdev_null0 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 [2024-05-16 18:41:30.935675] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.014 18:41:30 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 bdev_null1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.014 { 00:20:19.014 "params": { 00:20:19.014 "name": "Nvme$subsystem", 00:20:19.014 "trtype": "$TEST_TRANSPORT", 00:20:19.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.014 "adrfam": "ipv4", 00:20:19.014 "trsvcid": "$NVMF_PORT", 00:20:19.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.014 "hdgst": ${hdgst:-false}, 00:20:19.014 "ddgst": ${ddgst:-false} 00:20:19.014 }, 00:20:19.014 "method": "bdev_nvme_attach_controller" 00:20:19.014 } 00:20:19.014 EOF 00:20:19.014 )") 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local 
fio_dir=/usr/src/fio 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.014 { 00:20:19.014 "params": { 00:20:19.014 "name": "Nvme$subsystem", 00:20:19.014 "trtype": "$TEST_TRANSPORT", 00:20:19.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.014 "adrfam": "ipv4", 00:20:19.014 "trsvcid": "$NVMF_PORT", 00:20:19.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.014 "hdgst": ${hdgst:-false}, 00:20:19.014 "ddgst": ${ddgst:-false} 00:20:19.014 }, 00:20:19.014 "method": "bdev_nvme_attach_controller" 00:20:19.014 } 00:20:19.014 EOF 00:20:19.014 )") 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:19.014 18:41:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:19.014 "params": { 00:20:19.014 "name": "Nvme0", 00:20:19.014 "trtype": "tcp", 00:20:19.014 "traddr": "10.0.0.2", 00:20:19.014 "adrfam": "ipv4", 00:20:19.014 "trsvcid": "4420", 00:20:19.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.014 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.014 "hdgst": false, 00:20:19.014 "ddgst": false 00:20:19.014 }, 00:20:19.014 "method": "bdev_nvme_attach_controller" 00:20:19.014 },{ 00:20:19.014 "params": { 00:20:19.014 "name": "Nvme1", 00:20:19.014 "trtype": "tcp", 00:20:19.014 "traddr": "10.0.0.2", 00:20:19.014 "adrfam": "ipv4", 00:20:19.014 "trsvcid": "4420", 00:20:19.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.014 "hdgst": false, 00:20:19.014 "ddgst": false 00:20:19.014 }, 00:20:19.014 "method": "bdev_nvme_attach_controller" 00:20:19.014 }' 00:20:19.014 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:19.014 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:19.014 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.014 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.014 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:19.014 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:19.014 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:19.014 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:19.014 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:19.015 18:41:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.015 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:19.015 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:19.015 fio-3.35 00:20:19.015 Starting 2 threads 00:20:29.018 00:20:29.018 filename0: (groupid=0, jobs=1): err= 0: pid=83297: Thu May 16 18:41:41 2024 00:20:29.018 read: IOPS=5121, BW=20.0MiB/s (21.0MB/s)(200MiB/10001msec) 00:20:29.018 slat (nsec): min=6382, max=88477, avg=13256.57, stdev=4892.24 00:20:29.018 clat (usec): min=532, max=1182, avg=746.04, stdev=70.22 00:20:29.018 lat (usec): min=569, max=1208, avg=759.30, stdev=71.11 00:20:29.018 clat percentiles (usec): 00:20:29.018 | 1.00th=[ 627], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 685], 00:20:29.018 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 750], 00:20:29.018 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 848], 95.00th=[ 881], 00:20:29.018 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 1004], 99.95th=[ 1029], 00:20:29.018 | 99.99th=[ 1074] 00:20:29.018 bw ( KiB/s): min=18400, max=21568, per=50.30%, avg=20609.68, stdev=771.66, samples=19 00:20:29.018 iops : min= 4600, max= 
5392, avg=5152.42, stdev=192.91, samples=19 00:20:29.018 lat (usec) : 750=58.98%, 1000=40.89% 00:20:29.018 lat (msec) : 2=0.12% 00:20:29.018 cpu : usr=90.27%, sys=8.34%, ctx=73, majf=0, minf=0 00:20:29.018 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.018 issued rwts: total=51220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.018 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:29.018 filename1: (groupid=0, jobs=1): err= 0: pid=83298: Thu May 16 18:41:41 2024 00:20:29.018 read: IOPS=5121, BW=20.0MiB/s (21.0MB/s)(200MiB/10001msec) 00:20:29.018 slat (usec): min=6, max=105, avg=13.76, stdev= 5.09 00:20:29.018 clat (usec): min=603, max=1292, avg=742.98, stdev=68.18 00:20:29.018 lat (usec): min=612, max=1336, avg=756.75, stdev=69.30 00:20:29.018 clat percentiles (usec): 00:20:29.018 | 1.00th=[ 635], 5.00th=[ 652], 10.00th=[ 668], 20.00th=[ 685], 00:20:29.018 | 30.00th=[ 701], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 750], 00:20:29.018 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 873], 00:20:29.018 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 996], 99.95th=[ 1012], 00:20:29.018 | 99.99th=[ 1045] 00:20:29.018 bw ( KiB/s): min=18400, max=21568, per=50.30%, avg=20609.68, stdev=771.66, samples=19 00:20:29.018 iops : min= 4600, max= 5392, avg=5152.42, stdev=192.91, samples=19 00:20:29.018 lat (usec) : 750=60.68%, 1000=39.23% 00:20:29.018 lat (msec) : 2=0.09% 00:20:29.018 cpu : usr=90.02%, sys=8.49%, ctx=169, majf=0, minf=9 00:20:29.018 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.018 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.018 issued rwts: total=51220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.018 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:29.018 00:20:29.018 Run status group 0 (all jobs): 00:20:29.018 READ: bw=40.0MiB/s (42.0MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=400MiB (420MB), run=10001-10001msec 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.018 00:20:29.018 real 0m11.205s 00:20:29.018 user 0m18.842s 00:20:29.018 sys 0m1.987s 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:29.018 ************************************ 00:20:29.018 18:41:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:29.018 END TEST fio_dif_1_multi_subsystems 00:20:29.018 ************************************ 00:20:29.018 18:41:42 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:29.018 18:41:42 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:29.018 18:41:42 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:29.018 18:41:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:29.018 ************************************ 00:20:29.018 START TEST fio_dif_rand_params 00:20:29.018 ************************************ 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.018 bdev_null0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:29.018 [2024-05-16 18:41:42.198171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:29.018 18:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:29.019 { 00:20:29.019 "params": { 00:20:29.019 "name": "Nvme$subsystem", 00:20:29.019 "trtype": "$TEST_TRANSPORT", 00:20:29.019 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.019 "adrfam": "ipv4", 00:20:29.019 "trsvcid": "$NVMF_PORT", 00:20:29.019 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.019 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.019 "hdgst": ${hdgst:-false}, 00:20:29.019 "ddgst": ${ddgst:-false} 00:20:29.019 }, 00:20:29.019 "method": 
"bdev_nvme_attach_controller" 00:20:29.019 } 00:20:29.019 EOF 00:20:29.019 )") 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:29.019 "params": { 00:20:29.019 "name": "Nvme0", 00:20:29.019 "trtype": "tcp", 00:20:29.019 "traddr": "10.0.0.2", 00:20:29.019 "adrfam": "ipv4", 00:20:29.019 "trsvcid": "4420", 00:20:29.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:29.019 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:29.019 "hdgst": false, 00:20:29.019 "ddgst": false 00:20:29.019 }, 00:20:29.019 "method": "bdev_nvme_attach_controller" 00:20:29.019 }' 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:29.019 18:41:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.019 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:29.019 ... 
00:20:29.019 fio-3.35 00:20:29.019 Starting 3 threads 00:20:35.579 00:20:35.579 filename0: (groupid=0, jobs=1): err= 0: pid=83455: Thu May 16 18:41:47 2024 00:20:35.579 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5001msec) 00:20:35.579 slat (nsec): min=7005, max=52396, avg=15459.15, stdev=5147.10 00:20:35.579 clat (usec): min=10330, max=13870, avg=11127.31, stdev=521.22 00:20:35.579 lat (usec): min=10343, max=13895, avg=11142.77, stdev=521.92 00:20:35.579 clat percentiles (usec): 00:20:35.579 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:20:35.579 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 60.00th=[11076], 00:20:35.579 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11994], 95.00th=[12125], 00:20:35.579 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13829], 99.95th=[13829], 00:20:35.579 | 99.99th=[13829] 00:20:35.579 bw ( KiB/s): min=31488, max=35328, per=33.32%, avg=34389.33, stdev=1200.75, samples=9 00:20:35.579 iops : min= 246, max= 276, avg=268.67, stdev= 9.38, samples=9 00:20:35.579 lat (msec) : 20=100.00% 00:20:35.579 cpu : usr=91.14%, sys=8.32%, ctx=7, majf=0, minf=9 00:20:35.579 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.579 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.579 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:35.579 filename0: (groupid=0, jobs=1): err= 0: pid=83456: Thu May 16 18:41:47 2024 00:20:35.579 read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(168MiB/5004msec) 00:20:35.579 slat (nsec): min=6476, max=50678, avg=14442.90, stdev=5653.63 00:20:35.579 clat (usec): min=10322, max=16869, avg=11136.72, stdev=571.58 00:20:35.579 lat (usec): min=10334, max=16893, avg=11151.17, stdev=572.37 00:20:35.579 clat percentiles (usec): 00:20:35.579 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:20:35.579 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 60.00th=[11207], 00:20:35.579 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11994], 95.00th=[12125], 00:20:35.579 | 99.00th=[12518], 99.50th=[12649], 99.90th=[16909], 99.95th=[16909], 00:20:35.579 | 99.99th=[16909] 00:20:35.579 bw ( KiB/s): min=30720, max=35328, per=33.24%, avg=34304.00, stdev=1436.80, samples=9 00:20:35.579 iops : min= 240, max= 276, avg=268.00, stdev=11.22, samples=9 00:20:35.579 lat (msec) : 20=100.00% 00:20:35.579 cpu : usr=91.07%, sys=8.37%, ctx=4, majf=0, minf=9 00:20:35.579 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.579 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.579 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:35.579 filename0: (groupid=0, jobs=1): err= 0: pid=83457: Thu May 16 18:41:47 2024 00:20:35.579 read: IOPS=269, BW=33.6MiB/s (35.3MB/s)(168MiB/5005msec) 00:20:35.579 slat (nsec): min=6920, max=47371, avg=15648.97, stdev=5279.51 00:20:35.579 clat (usec): min=4656, max=13439, avg=11111.17, stdev=598.08 00:20:35.579 lat (usec): min=4664, max=13461, avg=11126.82, stdev=598.82 00:20:35.579 clat percentiles (usec): 00:20:35.579 | 1.00th=[10421], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:20:35.579 | 30.00th=[10814], 40.00th=[10945], 50.00th=[10945], 
60.00th=[11076], 00:20:35.579 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11994], 95.00th=[12125], 00:20:35.579 | 99.00th=[12518], 99.50th=[12649], 99.90th=[13435], 99.95th=[13435], 00:20:35.579 | 99.99th=[13435] 00:20:35.579 bw ( KiB/s): min=31551, max=35328, per=33.33%, avg=34396.33, stdev=1181.75, samples=9 00:20:35.579 iops : min= 246, max= 276, avg=268.67, stdev= 9.38, samples=9 00:20:35.579 lat (msec) : 10=0.22%, 20=99.78% 00:20:35.579 cpu : usr=91.05%, sys=8.31%, ctx=69, majf=0, minf=9 00:20:35.579 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:35.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:35.579 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:35.579 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:35.579 00:20:35.579 Run status group 0 (all jobs): 00:20:35.579 READ: bw=101MiB/s (106MB/s), 33.6MiB/s-33.6MiB/s (35.2MB/s-35.3MB/s), io=504MiB (529MB), run=5001-5005msec 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:35.579 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:35.580 18:41:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 bdev_null0 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 [2024-05-16 18:41:48.319150] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 bdev_null1 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 bdev_null2 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:35.580 { 00:20:35.580 "params": { 00:20:35.580 "name": "Nvme$subsystem", 00:20:35.580 "trtype": "$TEST_TRANSPORT", 00:20:35.580 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:35.580 "adrfam": "ipv4", 00:20:35.580 "trsvcid": "$NVMF_PORT", 00:20:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.580 "hdgst": ${hdgst:-false}, 00:20:35.580 "ddgst": ${ddgst:-false} 00:20:35.580 }, 00:20:35.580 "method": "bdev_nvme_attach_controller" 00:20:35.580 } 00:20:35.580 EOF 00:20:35.580 )") 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:35.580 { 00:20:35.580 "params": { 00:20:35.580 "name": "Nvme$subsystem", 00:20:35.580 "trtype": "$TEST_TRANSPORT", 00:20:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.580 "adrfam": "ipv4", 00:20:35.580 "trsvcid": "$NVMF_PORT", 00:20:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.580 "hdgst": ${hdgst:-false}, 00:20:35.580 "ddgst": ${ddgst:-false} 00:20:35.580 }, 00:20:35.580 "method": "bdev_nvme_attach_controller" 00:20:35.580 } 00:20:35.580 EOF 00:20:35.580 )") 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:35.580 18:41:48 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:35.580 { 00:20:35.580 "params": { 00:20:35.580 "name": "Nvme$subsystem", 00:20:35.580 "trtype": "$TEST_TRANSPORT", 00:20:35.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.580 "adrfam": "ipv4", 00:20:35.580 "trsvcid": "$NVMF_PORT", 00:20:35.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.580 "hdgst": ${hdgst:-false}, 00:20:35.580 "ddgst": ${ddgst:-false} 00:20:35.580 }, 00:20:35.580 "method": "bdev_nvme_attach_controller" 00:20:35.580 } 00:20:35.580 EOF 00:20:35.580 )") 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:35.580 18:41:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:35.580 "params": { 00:20:35.580 "name": "Nvme0", 00:20:35.580 "trtype": "tcp", 00:20:35.580 "traddr": "10.0.0.2", 00:20:35.581 "adrfam": "ipv4", 00:20:35.581 "trsvcid": "4420", 00:20:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:35.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:35.581 "hdgst": false, 00:20:35.581 "ddgst": false 00:20:35.581 }, 00:20:35.581 "method": "bdev_nvme_attach_controller" 00:20:35.581 },{ 00:20:35.581 "params": { 00:20:35.581 "name": "Nvme1", 00:20:35.581 "trtype": "tcp", 00:20:35.581 "traddr": "10.0.0.2", 00:20:35.581 "adrfam": "ipv4", 00:20:35.581 "trsvcid": "4420", 00:20:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.581 "hdgst": false, 00:20:35.581 "ddgst": false 00:20:35.581 }, 00:20:35.581 "method": "bdev_nvme_attach_controller" 00:20:35.581 },{ 00:20:35.581 "params": { 00:20:35.581 "name": "Nvme2", 00:20:35.581 "trtype": "tcp", 00:20:35.581 "traddr": "10.0.0.2", 00:20:35.581 "adrfam": "ipv4", 00:20:35.581 "trsvcid": "4420", 00:20:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.581 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.581 "hdgst": false, 00:20:35.581 "ddgst": false 00:20:35.581 }, 00:20:35.581 "method": "bdev_nvme_attach_controller" 00:20:35.581 }' 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:35.581 18:41:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:35.581 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:35.581 ... 00:20:35.581 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:35.581 ... 00:20:35.581 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:35.581 ... 00:20:35.581 fio-3.35 00:20:35.581 Starting 24 threads 00:20:47.782 00:20:47.782 filename0: (groupid=0, jobs=1): err= 0: pid=83555: Thu May 16 18:41:59 2024 00:20:47.782 read: IOPS=208, BW=834KiB/s (854kB/s)(8392KiB/10060msec) 00:20:47.782 slat (usec): min=4, max=8863, avg=34.68, stdev=401.48 00:20:47.782 clat (msec): min=6, max=153, avg=76.50, stdev=24.12 00:20:47.782 lat (msec): min=6, max=153, avg=76.53, stdev=24.12 00:20:47.782 clat percentiles (msec): 00:20:47.782 | 1.00th=[ 10], 5.00th=[ 45], 10.00th=[ 49], 20.00th=[ 59], 00:20:47.782 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:20:47.782 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 120], 00:20:47.782 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 153], 99.95th=[ 155], 00:20:47.782 | 99.99th=[ 155] 00:20:47.782 bw ( KiB/s): min= 512, max= 1392, per=4.00%, avg=832.80, stdev=194.68, samples=20 00:20:47.782 iops : min= 128, max= 348, avg=208.20, stdev=48.67, samples=20 00:20:47.782 lat (msec) : 10=1.43%, 20=0.86%, 50=9.20%, 100=70.31%, 250=18.21% 00:20:47.782 cpu : usr=41.75%, sys=1.81%, ctx=1186, majf=0, minf=9 00:20:47.782 IO depths : 1=0.1%, 2=2.3%, 4=9.3%, 8=73.1%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:47.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 complete : 0=0.0%, 4=90.1%, 8=7.9%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.782 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.782 filename0: (groupid=0, jobs=1): err= 0: pid=83556: Thu May 16 18:41:59 2024 00:20:47.782 read: IOPS=200, BW=802KiB/s (821kB/s)(8052KiB/10040msec) 00:20:47.782 slat (usec): min=5, max=8022, avg=27.62, stdev=252.96 00:20:47.782 clat (msec): min=12, max=160, avg=79.60, stdev=23.64 00:20:47.782 lat (msec): min=12, max=160, avg=79.63, stdev=23.65 00:20:47.782 clat percentiles (msec): 00:20:47.782 | 1.00th=[ 43], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:20:47.782 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:20:47.782 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 125], 00:20:47.782 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 159], 99.95th=[ 161], 00:20:47.782 | 99.99th=[ 161] 00:20:47.782 bw ( KiB/s): min= 528, max= 1152, per=3.84%, avg=798.80, stdev=157.67, samples=20 00:20:47.782 iops : min= 132, max= 288, avg=199.70, stdev=39.42, samples=20 00:20:47.782 lat (msec) : 20=0.79%, 50=10.23%, 100=69.75%, 250=19.23% 00:20:47.782 cpu : usr=42.78%, sys=2.19%, ctx=1329, majf=0, minf=9 00:20:47.782 IO depths : 1=0.1%, 2=3.4%, 4=14.0%, 8=68.3%, 16=14.3%, 32=0.0%, >=64=0.0% 00:20:47.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 complete : 0=0.0%, 4=91.2%, 8=5.7%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 issued rwts: total=2013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:20:47.782 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.782 filename0: (groupid=0, jobs=1): err= 0: pid=83557: Thu May 16 18:41:59 2024 00:20:47.782 read: IOPS=223, BW=894KiB/s (915kB/s)(8968KiB/10032msec) 00:20:47.782 slat (usec): min=4, max=8032, avg=29.55, stdev=338.20 00:20:47.782 clat (msec): min=34, max=123, avg=71.44, stdev=19.64 00:20:47.782 lat (msec): min=34, max=123, avg=71.47, stdev=19.64 00:20:47.782 clat percentiles (msec): 00:20:47.782 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:20:47.782 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:47.782 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 108], 00:20:47.782 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 122], 00:20:47.782 | 99.99th=[ 124] 00:20:47.782 bw ( KiB/s): min= 664, max= 1048, per=4.28%, avg=890.15, stdev=104.06, samples=20 00:20:47.782 iops : min= 166, max= 262, avg=222.50, stdev=25.97, samples=20 00:20:47.782 lat (msec) : 50=21.36%, 100=69.27%, 250=9.37% 00:20:47.782 cpu : usr=31.33%, sys=1.49%, ctx=872, majf=0, minf=9 00:20:47.782 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.6%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:47.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 complete : 0=0.0%, 4=87.4%, 8=12.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 issued rwts: total=2242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.782 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.782 filename0: (groupid=0, jobs=1): err= 0: pid=83558: Thu May 16 18:41:59 2024 00:20:47.782 read: IOPS=221, BW=888KiB/s (909kB/s)(8908KiB/10036msec) 00:20:47.782 slat (usec): min=4, max=2946, avg=14.54, stdev=62.37 00:20:47.782 clat (msec): min=32, max=125, avg=72.03, stdev=18.98 00:20:47.782 lat (msec): min=32, max=125, avg=72.04, stdev=18.98 00:20:47.782 clat percentiles (msec): 00:20:47.782 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:20:47.782 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:47.782 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 102], 95.00th=[ 108], 00:20:47.782 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 126], 99.95th=[ 126], 00:20:47.782 | 99.99th=[ 126] 00:20:47.782 bw ( KiB/s): min= 696, max= 1194, per=4.25%, avg=884.50, stdev=113.38, samples=20 00:20:47.782 iops : min= 174, max= 298, avg=221.10, stdev=28.27, samples=20 00:20:47.782 lat (msec) : 50=15.63%, 100=73.73%, 250=10.64% 00:20:47.782 cpu : usr=36.94%, sys=1.93%, ctx=1251, majf=0, minf=9 00:20:47.782 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:47.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 issued rwts: total=2227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.782 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.782 filename0: (groupid=0, jobs=1): err= 0: pid=83559: Thu May 16 18:41:59 2024 00:20:47.782 read: IOPS=226, BW=907KiB/s (928kB/s)(9084KiB/10020msec) 00:20:47.782 slat (usec): min=4, max=8032, avg=24.05, stdev=237.93 00:20:47.782 clat (msec): min=33, max=154, avg=70.45, stdev=19.65 00:20:47.782 lat (msec): min=33, max=154, avg=70.47, stdev=19.65 00:20:47.782 clat percentiles (msec): 00:20:47.782 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:20:47.782 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:20:47.782 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 101], 95.00th=[ 108], 
00:20:47.782 | 99.00th=[ 121], 99.50th=[ 131], 99.90th=[ 131], 99.95th=[ 155], 00:20:47.782 | 99.99th=[ 155] 00:20:47.782 bw ( KiB/s): min= 664, max= 1040, per=4.34%, avg=903.20, stdev=103.69, samples=20 00:20:47.782 iops : min= 166, max= 260, avg=225.80, stdev=25.92, samples=20 00:20:47.782 lat (msec) : 50=21.58%, 100=68.52%, 250=9.91% 00:20:47.782 cpu : usr=36.02%, sys=1.80%, ctx=997, majf=0, minf=9 00:20:47.782 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:47.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.782 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.782 filename0: (groupid=0, jobs=1): err= 0: pid=83560: Thu May 16 18:41:59 2024 00:20:47.782 read: IOPS=223, BW=895KiB/s (916kB/s)(8964KiB/10019msec) 00:20:47.782 slat (usec): min=7, max=8030, avg=30.82, stdev=276.42 00:20:47.782 clat (msec): min=20, max=145, avg=71.32, stdev=20.80 00:20:47.782 lat (msec): min=20, max=145, avg=71.35, stdev=20.81 00:20:47.782 clat percentiles (msec): 00:20:47.782 | 1.00th=[ 40], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 51], 00:20:47.782 | 30.00th=[ 58], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 73], 00:20:47.782 | 70.00th=[ 80], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 110], 00:20:47.782 | 99.00th=[ 124], 99.50th=[ 124], 99.90th=[ 138], 99.95th=[ 146], 00:20:47.782 | 99.99th=[ 146] 00:20:47.782 bw ( KiB/s): min= 640, max= 1024, per=4.29%, avg=892.50, stdev=132.23, samples=20 00:20:47.782 iops : min= 160, max= 256, avg=223.10, stdev=33.06, samples=20 00:20:47.782 lat (msec) : 50=19.10%, 100=67.51%, 250=13.39% 00:20:47.782 cpu : usr=43.02%, sys=2.26%, ctx=1382, majf=0, minf=9 00:20:47.782 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=80.1%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:47.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.782 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.782 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.782 filename0: (groupid=0, jobs=1): err= 0: pid=83561: Thu May 16 18:41:59 2024 00:20:47.782 read: IOPS=229, BW=917KiB/s (938kB/s)(9176KiB/10012msec) 00:20:47.782 slat (usec): min=8, max=8029, avg=27.11, stdev=289.60 00:20:47.782 clat (msec): min=13, max=123, avg=69.70, stdev=19.88 00:20:47.782 lat (msec): min=13, max=123, avg=69.73, stdev=19.89 00:20:47.782 clat percentiles (msec): 00:20:47.782 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 49], 00:20:47.782 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:20:47.783 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 108], 00:20:47.783 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 124], 00:20:47.783 | 99.99th=[ 124] 00:20:47.783 bw ( KiB/s): min= 688, max= 1024, per=4.40%, avg=914.05, stdev=101.61, samples=20 00:20:47.783 iops : min= 172, max= 256, avg=228.50, stdev=25.40, samples=20 00:20:47.783 lat (msec) : 20=0.26%, 50=23.50%, 100=66.70%, 250=9.55% 00:20:47.783 cpu : usr=33.44%, sys=1.65%, ctx=900, majf=0, minf=9 00:20:47.783 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:47.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 issued 
rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.783 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.783 filename0: (groupid=0, jobs=1): err= 0: pid=83562: Thu May 16 18:41:59 2024 00:20:47.783 read: IOPS=229, BW=917KiB/s (939kB/s)(9172KiB/10005msec) 00:20:47.783 slat (usec): min=7, max=8032, avg=23.44, stdev=205.23 00:20:47.783 clat (msec): min=6, max=133, avg=69.71, stdev=19.59 00:20:47.783 lat (msec): min=6, max=133, avg=69.73, stdev=19.58 00:20:47.783 clat percentiles (msec): 00:20:47.783 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:20:47.783 | 30.00th=[ 59], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:20:47.783 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 108], 00:20:47.783 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 129], 99.95th=[ 134], 00:20:47.783 | 99.99th=[ 134] 00:20:47.783 bw ( KiB/s): min= 712, max= 1024, per=4.36%, avg=906.11, stdev=99.09, samples=19 00:20:47.783 iops : min= 178, max= 256, avg=226.53, stdev=24.77, samples=19 00:20:47.783 lat (msec) : 10=0.13%, 20=0.57%, 50=22.76%, 100=67.51%, 250=9.03% 00:20:47.783 cpu : usr=33.39%, sys=1.52%, ctx=894, majf=0, minf=9 00:20:47.783 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:47.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 issued rwts: total=2293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.783 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.783 filename1: (groupid=0, jobs=1): err= 0: pid=83563: Thu May 16 18:41:59 2024 00:20:47.783 read: IOPS=236, BW=946KiB/s (969kB/s)(9460KiB/10001msec) 00:20:47.783 slat (usec): min=4, max=4023, avg=19.01, stdev=84.96 00:20:47.783 clat (usec): min=938, max=143806, avg=67573.20, stdev=23921.83 00:20:47.783 lat (usec): min=944, max=143821, avg=67592.21, stdev=23920.23 00:20:47.783 clat percentiles (usec): 00:20:47.783 | 1.00th=[ 1270], 5.00th=[ 35914], 10.00th=[ 45351], 20.00th=[ 48497], 00:20:47.783 | 30.00th=[ 55837], 40.00th=[ 62129], 50.00th=[ 70779], 60.00th=[ 71828], 00:20:47.783 | 70.00th=[ 77071], 80.00th=[ 83362], 90.00th=[101188], 95.00th=[107480], 00:20:47.783 | 99.00th=[116917], 99.50th=[127402], 99.90th=[127402], 99.95th=[143655], 00:20:47.783 | 99.99th=[143655] 00:20:47.783 bw ( KiB/s): min= 672, max= 1024, per=4.31%, avg=896.84, stdev=109.44, samples=19 00:20:47.783 iops : min= 168, max= 256, avg=224.11, stdev=27.39, samples=19 00:20:47.783 lat (usec) : 1000=0.17% 00:20:47.783 lat (msec) : 2=3.30%, 4=0.51%, 10=0.30%, 20=0.30%, 50=19.03% 00:20:47.783 lat (msec) : 100=65.84%, 250=10.57% 00:20:47.783 cpu : usr=41.39%, sys=2.33%, ctx=1333, majf=0, minf=9 00:20:47.783 IO depths : 1=0.1%, 2=0.5%, 4=2.2%, 8=81.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:47.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 issued rwts: total=2365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.783 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.783 filename1: (groupid=0, jobs=1): err= 0: pid=83564: Thu May 16 18:41:59 2024 00:20:47.783 read: IOPS=215, BW=860KiB/s (881kB/s)(8656KiB/10061msec) 00:20:47.783 slat (usec): min=3, max=4025, avg=17.05, stdev=111.73 00:20:47.783 clat (msec): min=13, max=150, avg=74.26, stdev=20.94 00:20:47.783 lat (msec): min=13, max=150, avg=74.27, stdev=20.93 00:20:47.783 clat percentiles 
(msec): 00:20:47.783 | 1.00th=[ 23], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:20:47.783 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:20:47.783 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 104], 95.00th=[ 109], 00:20:47.783 | 99.00th=[ 136], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:20:47.783 | 99.99th=[ 150] 00:20:47.783 bw ( KiB/s): min= 624, max= 1152, per=4.13%, avg=859.20, stdev=131.98, samples=20 00:20:47.783 iops : min= 156, max= 288, avg=214.80, stdev=32.99, samples=20 00:20:47.783 lat (msec) : 20=0.74%, 50=12.48%, 100=74.08%, 250=12.71% 00:20:47.783 cpu : usr=43.52%, sys=2.24%, ctx=1422, majf=0, minf=9 00:20:47.783 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:47.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 complete : 0=0.0%, 4=88.9%, 8=10.0%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 issued rwts: total=2164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.783 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.783 filename1: (groupid=0, jobs=1): err= 0: pid=83565: Thu May 16 18:41:59 2024 00:20:47.783 read: IOPS=222, BW=892KiB/s (913kB/s)(8924KiB/10008msec) 00:20:47.783 slat (usec): min=4, max=11034, avg=55.29, stdev=572.10 00:20:47.783 clat (msec): min=9, max=134, avg=71.55, stdev=19.86 00:20:47.783 lat (msec): min=9, max=134, avg=71.60, stdev=19.86 00:20:47.783 clat percentiles (msec): 00:20:47.783 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:20:47.783 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 72], 00:20:47.783 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 108], 00:20:47.783 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:47.783 | 99.99th=[ 136] 00:20:47.783 bw ( KiB/s): min= 640, max= 1024, per=4.24%, avg=882.95, stdev=109.57, samples=19 00:20:47.783 iops : min= 160, max= 256, avg=220.74, stdev=27.39, samples=19 00:20:47.783 lat (msec) : 10=0.27%, 50=20.66%, 100=69.92%, 250=9.14% 00:20:47.783 cpu : usr=31.21%, sys=1.67%, ctx=860, majf=0, minf=9 00:20:47.783 IO depths : 1=0.1%, 2=0.2%, 4=1.0%, 8=82.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:47.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.783 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.783 filename1: (groupid=0, jobs=1): err= 0: pid=83566: Thu May 16 18:41:59 2024 00:20:47.783 read: IOPS=191, BW=767KiB/s (785kB/s)(7696KiB/10040msec) 00:20:47.783 slat (usec): min=7, max=8031, avg=31.96, stdev=371.51 00:20:47.783 clat (msec): min=39, max=166, avg=83.23, stdev=21.12 00:20:47.783 lat (msec): min=39, max=166, avg=83.26, stdev=21.12 00:20:47.783 clat percentiles (msec): 00:20:47.783 | 1.00th=[ 48], 5.00th=[ 51], 10.00th=[ 60], 20.00th=[ 69], 00:20:47.783 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 84], 00:20:47.783 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 110], 95.00th=[ 126], 00:20:47.783 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 167], 99.95th=[ 167], 00:20:47.783 | 99.99th=[ 167] 00:20:47.783 bw ( KiB/s): min= 512, max= 1008, per=3.67%, avg=763.20, stdev=138.91, samples=20 00:20:47.783 iops : min= 128, max= 252, avg=190.80, stdev=34.73, samples=20 00:20:47.783 lat (msec) : 50=4.42%, 100=74.58%, 250=21.00% 00:20:47.783 cpu : usr=37.51%, sys=1.85%, ctx=1124, majf=0, minf=9 00:20:47.783 IO depths : 1=0.1%, 
2=4.5%, 4=18.2%, 8=63.5%, 16=13.8%, 32=0.0%, >=64=0.0% 00:20:47.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 complete : 0=0.0%, 4=92.6%, 8=3.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.783 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.783 filename1: (groupid=0, jobs=1): err= 0: pid=83567: Thu May 16 18:41:59 2024 00:20:47.783 read: IOPS=223, BW=895KiB/s (917kB/s)(8976KiB/10028msec) 00:20:47.783 slat (usec): min=4, max=8023, avg=23.21, stdev=253.67 00:20:47.783 clat (msec): min=30, max=143, avg=71.38, stdev=19.73 00:20:47.783 lat (msec): min=30, max=144, avg=71.40, stdev=19.72 00:20:47.783 clat percentiles (msec): 00:20:47.783 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:20:47.783 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 73], 00:20:47.783 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 108], 00:20:47.783 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 142], 99.95th=[ 142], 00:20:47.783 | 99.99th=[ 144] 00:20:47.783 bw ( KiB/s): min= 632, max= 1024, per=4.29%, avg=891.20, stdev=99.32, samples=20 00:20:47.783 iops : min= 158, max= 256, avg=222.80, stdev=24.83, samples=20 00:20:47.783 lat (msec) : 50=18.94%, 100=71.08%, 250=9.98% 00:20:47.783 cpu : usr=37.36%, sys=2.00%, ctx=1178, majf=0, minf=9 00:20:47.783 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:47.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.783 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.783 filename1: (groupid=0, jobs=1): err= 0: pid=83568: Thu May 16 18:41:59 2024 00:20:47.783 read: IOPS=224, BW=897KiB/s (919kB/s)(8976KiB/10002msec) 00:20:47.783 slat (usec): min=4, max=8025, avg=26.06, stdev=253.66 00:20:47.783 clat (msec): min=6, max=130, avg=71.21, stdev=20.21 00:20:47.783 lat (msec): min=6, max=130, avg=71.24, stdev=20.21 00:20:47.783 clat percentiles (msec): 00:20:47.783 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 51], 00:20:47.783 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:20:47.783 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 104], 95.00th=[ 108], 00:20:47.783 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 131], 00:20:47.783 | 99.99th=[ 131] 00:20:47.783 bw ( KiB/s): min= 672, max= 976, per=4.26%, avg=885.37, stdev=95.14, samples=19 00:20:47.783 iops : min= 168, max= 244, avg=221.32, stdev=23.78, samples=19 00:20:47.783 lat (msec) : 10=0.31%, 20=0.27%, 50=19.52%, 100=69.25%, 250=10.65% 00:20:47.783 cpu : usr=35.03%, sys=1.58%, ctx=979, majf=0, minf=9 00:20:47.783 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=82.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:47.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.783 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.783 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.784 filename1: (groupid=0, jobs=1): err= 0: pid=83569: Thu May 16 18:41:59 2024 00:20:47.784 read: IOPS=222, BW=888KiB/s (909kB/s)(8912KiB/10036msec) 00:20:47.784 slat (usec): min=4, max=8050, avg=38.86, stdev=415.72 00:20:47.784 clat (msec): min=26, max=131, avg=71.87, stdev=18.87 00:20:47.784 lat 
(msec): min=26, max=131, avg=71.91, stdev=18.89 00:20:47.784 clat percentiles (msec): 00:20:47.784 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:20:47.784 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:20:47.784 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 108], 00:20:47.784 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:20:47.784 | 99.99th=[ 132] 00:20:47.784 bw ( KiB/s): min= 720, max= 1008, per=4.25%, avg=884.80, stdev=92.98, samples=20 00:20:47.784 iops : min= 180, max= 252, avg=221.20, stdev=23.25, samples=20 00:20:47.784 lat (msec) : 50=16.56%, 100=73.79%, 250=9.65% 00:20:47.784 cpu : usr=33.32%, sys=2.00%, ctx=965, majf=0, minf=9 00:20:47.784 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:47.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 issued rwts: total=2228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.784 filename1: (groupid=0, jobs=1): err= 0: pid=83570: Thu May 16 18:41:59 2024 00:20:47.784 read: IOPS=217, BW=871KiB/s (892kB/s)(8740KiB/10036msec) 00:20:47.784 slat (usec): min=4, max=8025, avg=32.54, stdev=382.83 00:20:47.784 clat (msec): min=27, max=129, avg=73.32, stdev=19.12 00:20:47.784 lat (msec): min=27, max=129, avg=73.35, stdev=19.14 00:20:47.784 clat percentiles (msec): 00:20:47.784 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:20:47.784 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 74], 00:20:47.784 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 108], 00:20:47.784 | 99.00th=[ 118], 99.50th=[ 120], 99.90th=[ 130], 99.95th=[ 130], 00:20:47.784 | 99.99th=[ 130] 00:20:47.784 bw ( KiB/s): min= 656, max= 992, per=4.17%, avg=867.60, stdev=94.71, samples=20 00:20:47.784 iops : min= 164, max= 248, avg=216.90, stdev=23.68, samples=20 00:20:47.784 lat (msec) : 50=17.03%, 100=72.17%, 250=10.80% 00:20:47.784 cpu : usr=31.49%, sys=1.36%, ctx=875, majf=0, minf=9 00:20:47.784 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=81.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:47.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 complete : 0=0.0%, 4=87.8%, 8=11.8%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 issued rwts: total=2185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.784 filename2: (groupid=0, jobs=1): err= 0: pid=83571: Thu May 16 18:41:59 2024 00:20:47.784 read: IOPS=213, BW=853KiB/s (874kB/s)(8556KiB/10029msec) 00:20:47.784 slat (usec): min=4, max=4028, avg=25.28, stdev=170.10 00:20:47.784 clat (msec): min=36, max=156, avg=74.87, stdev=21.72 00:20:47.784 lat (msec): min=36, max=156, avg=74.89, stdev=21.72 00:20:47.784 clat percentiles (msec): 00:20:47.784 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:20:47.784 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:47.784 | 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 112], 00:20:47.784 | 99.00th=[ 132], 99.50th=[ 133], 99.90th=[ 144], 99.95th=[ 157], 00:20:47.784 | 99.99th=[ 157] 00:20:47.784 bw ( KiB/s): min= 528, max= 1024, per=4.08%, avg=849.25, stdev=152.34, samples=20 00:20:47.784 iops : min= 132, max= 256, avg=212.30, stdev=38.08, samples=20 00:20:47.784 lat (msec) : 50=16.18%, 100=66.76%, 250=17.06% 00:20:47.784 cpu : usr=41.73%, sys=1.90%, 
ctx=1218, majf=0, minf=9 00:20:47.784 IO depths : 1=0.1%, 2=2.1%, 4=8.2%, 8=75.0%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:47.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 issued rwts: total=2139,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.784 filename2: (groupid=0, jobs=1): err= 0: pid=83572: Thu May 16 18:41:59 2024 00:20:47.784 read: IOPS=213, BW=853KiB/s (874kB/s)(8580KiB/10056msec) 00:20:47.784 slat (usec): min=4, max=8025, avg=18.15, stdev=173.06 00:20:47.784 clat (msec): min=3, max=144, avg=74.91, stdev=24.40 00:20:47.784 lat (msec): min=3, max=144, avg=74.93, stdev=24.40 00:20:47.784 clat percentiles (msec): 00:20:47.784 | 1.00th=[ 12], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:20:47.784 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:20:47.784 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:20:47.784 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:47.784 | 99.99th=[ 144] 00:20:47.784 bw ( KiB/s): min= 512, max= 1248, per=4.09%, avg=851.60, stdev=173.58, samples=20 00:20:47.784 iops : min= 128, max= 312, avg=212.90, stdev=43.40, samples=20 00:20:47.784 lat (msec) : 4=0.65%, 20=1.49%, 50=14.27%, 100=68.25%, 250=15.34% 00:20:47.784 cpu : usr=35.27%, sys=1.78%, ctx=1035, majf=0, minf=9 00:20:47.784 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=76.9%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:47.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 complete : 0=0.0%, 4=89.0%, 8=9.7%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 issued rwts: total=2145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.784 filename2: (groupid=0, jobs=1): err= 0: pid=83573: Thu May 16 18:41:59 2024 00:20:47.784 read: IOPS=204, BW=817KiB/s (837kB/s)(8200KiB/10036msec) 00:20:47.784 slat (usec): min=4, max=8180, avg=35.25, stdev=366.44 00:20:47.784 clat (msec): min=36, max=148, avg=78.08, stdev=21.39 00:20:47.784 lat (msec): min=36, max=148, avg=78.12, stdev=21.38 00:20:47.784 clat percentiles (msec): 00:20:47.784 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:20:47.784 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 81], 00:20:47.784 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 112], 00:20:47.784 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:20:47.784 | 99.99th=[ 148] 00:20:47.784 bw ( KiB/s): min= 512, max= 1026, per=3.91%, avg=813.70, stdev=152.03, samples=20 00:20:47.784 iops : min= 128, max= 256, avg=203.40, stdev=37.97, samples=20 00:20:47.784 lat (msec) : 50=11.02%, 100=70.44%, 250=18.54% 00:20:47.784 cpu : usr=35.19%, sys=1.80%, ctx=963, majf=0, minf=9 00:20:47.784 IO depths : 1=0.1%, 2=2.5%, 4=9.9%, 8=72.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:47.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 complete : 0=0.0%, 4=90.2%, 8=7.7%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 issued rwts: total=2050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.784 filename2: (groupid=0, jobs=1): err= 0: pid=83574: Thu May 16 18:41:59 2024 00:20:47.784 read: IOPS=207, BW=830KiB/s (850kB/s)(8332KiB/10039msec) 00:20:47.784 slat (usec): min=7, max=4021, avg=15.93, stdev=87.99 00:20:47.784 clat 
(msec): min=36, max=155, avg=76.98, stdev=22.66 00:20:47.784 lat (msec): min=36, max=155, avg=77.00, stdev=22.66 00:20:47.784 clat percentiles (msec): 00:20:47.784 | 1.00th=[ 42], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 56], 00:20:47.784 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:20:47.784 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 120], 00:20:47.784 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 155], 00:20:47.784 | 99.99th=[ 155] 00:20:47.784 bw ( KiB/s): min= 512, max= 1000, per=3.97%, avg=826.85, stdev=158.70, samples=20 00:20:47.784 iops : min= 128, max= 250, avg=206.70, stdev=39.67, samples=20 00:20:47.784 lat (msec) : 50=12.19%, 100=69.42%, 250=18.39% 00:20:47.784 cpu : usr=39.85%, sys=1.84%, ctx=1427, majf=0, minf=9 00:20:47.784 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=76.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:47.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 complete : 0=0.0%, 4=89.1%, 8=9.6%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 issued rwts: total=2083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.784 filename2: (groupid=0, jobs=1): err= 0: pid=83575: Thu May 16 18:41:59 2024 00:20:47.784 read: IOPS=220, BW=881KiB/s (902kB/s)(8856KiB/10050msec) 00:20:47.784 slat (usec): min=6, max=8022, avg=23.66, stdev=218.83 00:20:47.784 clat (msec): min=11, max=143, avg=72.50, stdev=21.69 00:20:47.784 lat (msec): min=11, max=144, avg=72.53, stdev=21.70 00:20:47.784 clat percentiles (msec): 00:20:47.784 | 1.00th=[ 12], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:20:47.784 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:20:47.784 | 70.00th=[ 81], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 109], 00:20:47.784 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 134], 99.95th=[ 144], 00:20:47.784 | 99.99th=[ 144] 00:20:47.784 bw ( KiB/s): min= 512, max= 1152, per=4.22%, avg=878.90, stdev=141.96, samples=20 00:20:47.784 iops : min= 128, max= 288, avg=219.70, stdev=35.48, samples=20 00:20:47.784 lat (msec) : 20=1.45%, 50=16.40%, 100=69.78%, 250=12.38% 00:20:47.784 cpu : usr=37.62%, sys=2.02%, ctx=1213, majf=0, minf=9 00:20:47.784 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:47.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 issued rwts: total=2214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.784 filename2: (groupid=0, jobs=1): err= 0: pid=83576: Thu May 16 18:41:59 2024 00:20:47.784 read: IOPS=201, BW=807KiB/s (826kB/s)(8080KiB/10012msec) 00:20:47.784 slat (usec): min=4, max=8043, avg=34.34, stdev=398.52 00:20:47.784 clat (msec): min=15, max=165, avg=79.07, stdev=22.11 00:20:47.784 lat (msec): min=15, max=165, avg=79.10, stdev=22.10 00:20:47.784 clat percentiles (msec): 00:20:47.784 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:47.784 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 82], 00:20:47.784 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 109], 95.00th=[ 116], 00:20:47.784 | 99.00th=[ 127], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 165], 00:20:47.784 | 99.99th=[ 165] 00:20:47.784 bw ( KiB/s): min= 528, max= 1072, per=3.87%, avg=804.00, stdev=150.33, samples=20 00:20:47.784 iops : min= 132, max= 268, avg=201.00, stdev=37.58, samples=20 00:20:47.784 lat (msec) : 
20=0.50%, 50=12.48%, 100=67.13%, 250=19.90% 00:20:47.784 cpu : usr=38.70%, sys=1.89%, ctx=1123, majf=0, minf=9 00:20:47.784 IO depths : 1=0.1%, 2=3.5%, 4=14.2%, 8=68.3%, 16=14.0%, 32=0.0%, >=64=0.0% 00:20:47.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 complete : 0=0.0%, 4=91.0%, 8=5.9%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.784 issued rwts: total=2020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.785 filename2: (groupid=0, jobs=1): err= 0: pid=83577: Thu May 16 18:41:59 2024 00:20:47.785 read: IOPS=208, BW=835KiB/s (855kB/s)(8376KiB/10037msec) 00:20:47.785 slat (usec): min=3, max=8044, avg=21.11, stdev=175.64 00:20:47.785 clat (msec): min=36, max=172, avg=76.55, stdev=21.15 00:20:47.785 lat (msec): min=36, max=172, avg=76.57, stdev=21.16 00:20:47.785 clat percentiles (msec): 00:20:47.785 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:20:47.785 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 80], 00:20:47.785 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 112], 00:20:47.785 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 174], 00:20:47.785 | 99.99th=[ 174] 00:20:47.785 bw ( KiB/s): min= 528, max= 1026, per=4.00%, avg=831.30, stdev=139.97, samples=20 00:20:47.785 iops : min= 132, max= 256, avg=207.80, stdev=34.96, samples=20 00:20:47.785 lat (msec) : 50=13.94%, 100=68.72%, 250=17.34% 00:20:47.785 cpu : usr=35.00%, sys=2.01%, ctx=973, majf=0, minf=9 00:20:47.785 IO depths : 1=0.1%, 2=2.0%, 4=7.8%, 8=74.8%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:47.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.785 complete : 0=0.0%, 4=89.6%, 8=8.7%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.785 issued rwts: total=2094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.785 filename2: (groupid=0, jobs=1): err= 0: pid=83578: Thu May 16 18:41:59 2024 00:20:47.785 read: IOPS=228, BW=914KiB/s (936kB/s)(9144KiB/10005msec) 00:20:47.785 slat (usec): min=5, max=8026, avg=32.33, stdev=301.30 00:20:47.785 clat (msec): min=4, max=133, avg=69.91, stdev=20.08 00:20:47.785 lat (msec): min=4, max=133, avg=69.94, stdev=20.08 00:20:47.785 clat percentiles (msec): 00:20:47.785 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 51], 00:20:47.785 | 30.00th=[ 57], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 72], 00:20:47.785 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 102], 95.00th=[ 108], 00:20:47.785 | 99.00th=[ 117], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 133], 00:20:47.785 | 99.99th=[ 133] 00:20:47.785 bw ( KiB/s): min= 712, max= 1024, per=4.33%, avg=901.95, stdev=104.90, samples=19 00:20:47.785 iops : min= 178, max= 256, avg=225.47, stdev=26.23, samples=19 00:20:47.785 lat (msec) : 10=0.26%, 20=0.39%, 50=18.64%, 100=70.56%, 250=10.15% 00:20:47.785 cpu : usr=39.55%, sys=1.81%, ctx=1424, majf=0, minf=9 00:20:47.785 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:47.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.785 complete : 0=0.0%, 4=87.1%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:47.785 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:47.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:47.785 00:20:47.785 Run status group 0 (all jobs): 00:20:47.785 READ: bw=20.3MiB/s (21.3MB/s), 767KiB/s-946KiB/s (785kB/s-969kB/s), io=204MiB (214MB), 
run=10001-10061msec 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 bdev_null0 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 [2024-05-16 18:41:59.713484] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 bdev_null1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:47.785 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.786 { 00:20:47.786 "params": { 00:20:47.786 "name": "Nvme$subsystem", 00:20:47.786 "trtype": "$TEST_TRANSPORT", 00:20:47.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.786 "adrfam": "ipv4", 00:20:47.786 "trsvcid": "$NVMF_PORT", 00:20:47.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.786 "hdgst": ${hdgst:-false}, 00:20:47.786 "ddgst": ${ddgst:-false} 00:20:47.786 }, 00:20:47.786 "method": "bdev_nvme_attach_controller" 00:20:47.786 } 00:20:47.786 EOF 00:20:47.786 )") 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:47.786 { 00:20:47.786 "params": { 00:20:47.786 "name": "Nvme$subsystem", 00:20:47.786 "trtype": "$TEST_TRANSPORT", 00:20:47.786 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:47.786 "adrfam": "ipv4", 00:20:47.786 "trsvcid": "$NVMF_PORT", 00:20:47.786 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:47.786 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:47.786 "hdgst": ${hdgst:-false}, 00:20:47.786 "ddgst": ${ddgst:-false} 00:20:47.786 }, 00:20:47.786 "method": "bdev_nvme_attach_controller" 00:20:47.786 } 00:20:47.786 EOF 00:20:47.786 )") 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
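The xtrace around this point shows how the harness wires fio to the NVMe/TCP target for the random-params run: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem, the entries are joined with IFS=, and pretty-printed with jq, and fio is launched with the SPDK bdev ioengine preloaded, reading that JSON from /dev/fd/62 and the generated job file from /dev/fd/61. A minimal standalone sketch of the same pattern follows; the plugin path, addresses and NQNs are copied from this trace, while the /tmp file names and the outer "subsystems"/"bdev" wrapper are assumptions (the trace itself only prints the per-controller entries).

# Sketch of the traced pattern, using hypothetical /tmp paths instead of the
# /dev/fd descriptors the harness passes.
SPDK_FIO_PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# One bdev_nvme_attach_controller entry per target subsystem, matching the
# JSON printf'd below; the surrounding wrapper is assumed from SPDK's
# standard JSON-config layout.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# fio loads the external SPDK bdev ioengine via LD_PRELOAD, attaches to the
# controllers described in the JSON, then runs the job file.
LD_PRELOAD=$SPDK_FIO_PLUGIN /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio

Because neither sanitizer library is detected here (asan_lib stays empty and both [[ -n '' ]] checks fail), only the plugin ends up in LD_PRELOAD, which is why the traced LD_PRELOAD value below begins with a stray space.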
00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:47.786 "params": { 00:20:47.786 "name": "Nvme0", 00:20:47.786 "trtype": "tcp", 00:20:47.786 "traddr": "10.0.0.2", 00:20:47.786 "adrfam": "ipv4", 00:20:47.786 "trsvcid": "4420", 00:20:47.786 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:47.786 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:47.786 "hdgst": false, 00:20:47.786 "ddgst": false 00:20:47.786 }, 00:20:47.786 "method": "bdev_nvme_attach_controller" 00:20:47.786 },{ 00:20:47.786 "params": { 00:20:47.786 "name": "Nvme1", 00:20:47.786 "trtype": "tcp", 00:20:47.786 "traddr": "10.0.0.2", 00:20:47.786 "adrfam": "ipv4", 00:20:47.786 "trsvcid": "4420", 00:20:47.786 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.786 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.786 "hdgst": false, 00:20:47.786 "ddgst": false 00:20:47.786 }, 00:20:47.786 "method": "bdev_nvme_attach_controller" 00:20:47.786 }' 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:47.786 18:41:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:47.786 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:47.786 ... 00:20:47.786 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:47.786 ... 
00:20:47.786 fio-3.35 00:20:47.786 Starting 4 threads 00:20:53.071 00:20:53.071 filename0: (groupid=0, jobs=1): err= 0: pid=83730: Thu May 16 18:42:05 2024 00:20:53.071 read: IOPS=2015, BW=15.7MiB/s (16.5MB/s)(78.8MiB/5002msec) 00:20:53.072 slat (usec): min=6, max=314, avg=13.44, stdev= 5.78 00:20:53.072 clat (usec): min=849, max=7001, avg=3915.74, stdev=432.98 00:20:53.072 lat (usec): min=862, max=7015, avg=3929.18, stdev=433.33 00:20:53.072 clat percentiles (usec): 00:20:53.072 | 1.00th=[ 2147], 5.00th=[ 3392], 10.00th=[ 3425], 20.00th=[ 3523], 00:20:53.072 | 30.00th=[ 3687], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4080], 00:20:53.072 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4359], 95.00th=[ 4490], 00:20:53.072 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5407], 99.95th=[ 6063], 00:20:53.072 | 99.99th=[ 6259] 00:20:53.072 bw ( KiB/s): min=15104, max=18448, per=22.55%, avg=16252.33, stdev=1344.29, samples=9 00:20:53.072 iops : min= 1888, max= 2306, avg=2031.44, stdev=168.05, samples=9 00:20:53.072 lat (usec) : 1000=0.07% 00:20:53.072 lat (msec) : 2=0.80%, 4=43.55%, 10=55.58% 00:20:53.072 cpu : usr=91.16%, sys=7.78%, ctx=68, majf=0, minf=9 00:20:53.072 IO depths : 1=0.1%, 2=24.1%, 4=50.6%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.072 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.072 issued rwts: total=10080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.072 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:53.072 filename0: (groupid=0, jobs=1): err= 0: pid=83731: Thu May 16 18:42:05 2024 00:20:53.072 read: IOPS=2457, BW=19.2MiB/s (20.1MB/s)(96.0MiB/5001msec) 00:20:53.072 slat (nsec): min=6925, max=61837, avg=13763.41, stdev=3953.70 00:20:53.072 clat (usec): min=844, max=6963, avg=3215.95, stdev=821.94 00:20:53.072 lat (usec): min=852, max=6989, avg=3229.72, stdev=821.57 00:20:53.072 clat percentiles (usec): 00:20:53.072 | 1.00th=[ 1778], 5.00th=[ 1942], 10.00th=[ 2008], 20.00th=[ 2212], 00:20:53.072 | 30.00th=[ 2507], 40.00th=[ 3392], 50.00th=[ 3490], 60.00th=[ 3654], 00:20:53.072 | 70.00th=[ 3818], 80.00th=[ 3949], 90.00th=[ 4080], 95.00th=[ 4228], 00:20:53.072 | 99.00th=[ 4555], 99.50th=[ 4752], 99.90th=[ 5276], 99.95th=[ 5342], 00:20:53.072 | 99.99th=[ 5407] 00:20:53.072 bw ( KiB/s): min=17280, max=21344, per=27.14%, avg=19559.11, stdev=1728.93, samples=9 00:20:53.072 iops : min= 2160, max= 2668, avg=2444.89, stdev=216.12, samples=9 00:20:53.072 lat (usec) : 1000=0.16% 00:20:53.072 lat (msec) : 2=9.60%, 4=74.43%, 10=15.81% 00:20:53.072 cpu : usr=90.98%, sys=8.12%, ctx=7, majf=0, minf=0 00:20:53.072 IO depths : 1=0.1%, 2=8.0%, 4=59.5%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.072 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.072 issued rwts: total=12290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.072 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:53.072 filename1: (groupid=0, jobs=1): err= 0: pid=83732: Thu May 16 18:42:05 2024 00:20:53.072 read: IOPS=2111, BW=16.5MiB/s (17.3MB/s)(82.5MiB/5002msec) 00:20:53.072 slat (nsec): min=6673, max=55849, avg=12033.56, stdev=4044.92 00:20:53.072 clat (usec): min=582, max=8074, avg=3745.19, stdev=689.41 00:20:53.072 lat (usec): min=590, max=8097, avg=3757.23, stdev=690.39 00:20:53.072 clat percentiles (usec): 00:20:53.072 | 1.00th=[ 1188], 5.00th=[ 1991], 
10.00th=[ 3195], 20.00th=[ 3458], 00:20:53.072 | 30.00th=[ 3556], 40.00th=[ 3720], 50.00th=[ 4015], 60.00th=[ 4047], 00:20:53.072 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4424], 00:20:53.072 | 99.00th=[ 4621], 99.50th=[ 4686], 99.90th=[ 5080], 99.95th=[ 6521], 00:20:53.072 | 99.99th=[ 6521] 00:20:53.072 bw ( KiB/s): min=15104, max=20928, per=23.74%, avg=17111.11, stdev=2345.61, samples=9 00:20:53.072 iops : min= 1888, max= 2616, avg=2138.89, stdev=293.20, samples=9 00:20:53.072 lat (usec) : 750=0.16%, 1000=0.27% 00:20:53.072 lat (msec) : 2=4.62%, 4=44.30%, 10=50.65% 00:20:53.072 cpu : usr=91.88%, sys=7.36%, ctx=6, majf=0, minf=0 00:20:53.072 IO depths : 1=0.1%, 2=20.6%, 4=52.9%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.072 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.072 issued rwts: total=10561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.072 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:53.072 filename1: (groupid=0, jobs=1): err= 0: pid=83733: Thu May 16 18:42:05 2024 00:20:53.072 read: IOPS=2426, BW=19.0MiB/s (19.9MB/s)(94.8MiB/5001msec) 00:20:53.072 slat (nsec): min=6864, max=54410, avg=13419.99, stdev=3748.32 00:20:53.072 clat (usec): min=1020, max=5443, avg=3258.62, stdev=825.89 00:20:53.072 lat (usec): min=1033, max=5460, avg=3272.04, stdev=826.38 00:20:53.072 clat percentiles (usec): 00:20:53.072 | 1.00th=[ 1893], 5.00th=[ 1958], 10.00th=[ 2008], 20.00th=[ 2245], 00:20:53.072 | 30.00th=[ 2606], 40.00th=[ 3392], 50.00th=[ 3523], 60.00th=[ 3687], 00:20:53.072 | 70.00th=[ 3851], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4293], 00:20:53.072 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 5407], 00:20:53.072 | 99.99th=[ 5407] 00:20:53.072 bw ( KiB/s): min=16160, max=21424, per=26.76%, avg=19290.67, stdev=2050.76, samples=9 00:20:53.072 iops : min= 2020, max= 2678, avg=2411.33, stdev=256.35, samples=9 00:20:53.072 lat (msec) : 2=9.11%, 4=73.24%, 10=17.64% 00:20:53.072 cpu : usr=92.26%, sys=6.86%, ctx=39, majf=0, minf=0 00:20:53.072 IO depths : 1=0.1%, 2=8.6%, 4=59.0%, 8=32.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.072 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.072 complete : 0=0.0%, 4=96.8%, 8=3.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.072 issued rwts: total=12136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.072 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:53.072 00:20:53.072 Run status group 0 (all jobs): 00:20:53.072 READ: bw=70.4MiB/s (73.8MB/s), 15.7MiB/s-19.2MiB/s (16.5MB/s-20.1MB/s), io=352MiB (369MB), run=5001-5002msec 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.072 18:42:05 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.072 00:20:53.072 real 0m23.661s 00:20:53.072 user 2m3.957s 00:20:53.072 sys 0m8.192s 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:53.072 18:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:53.072 ************************************ 00:20:53.072 END TEST fio_dif_rand_params 00:20:53.072 ************************************ 00:20:53.072 18:42:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:53.072 18:42:05 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:53.072 18:42:05 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:53.072 18:42:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:53.072 ************************************ 00:20:53.072 START TEST fio_dif_digest 00:20:53.072 ************************************ 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:53.072 18:42:05 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:53.072 bdev_null0 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.072 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:53.073 [2024-05-16 18:42:05.915998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.073 { 00:20:53.073 "params": { 00:20:53.073 "name": "Nvme$subsystem", 00:20:53.073 "trtype": "$TEST_TRANSPORT", 00:20:53.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.073 "adrfam": "ipv4", 00:20:53.073 "trsvcid": "$NVMF_PORT", 00:20:53.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.073 "hdgst": ${hdgst:-false}, 00:20:53.073 "ddgst": ${ddgst:-false} 00:20:53.073 }, 00:20:53.073 "method": "bdev_nvme_attach_controller" 00:20:53.073 } 00:20:53.073 EOF 00:20:53.073 )") 00:20:53.073 
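At this point the digest test has stood up its target: a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3 protection, exposed through nqn.2016-06.io.spdk:cnode0 on a TCP listener at 10.0.0.2:4420. The rpc_cmd calls above are the autotest wrapper around SPDK's scripts/rpc.py; a standalone sketch of the same sequence is below (the rpc.py path is inferred from this workspace, and the TCP transport is assumed to have been created earlier in the test, outside this excerpt).

# Standalone equivalent of the rpc_cmd sequence traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 64 MiB null bdev: 512-byte blocks plus 16-byte metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Expose it over NVMe/TCP
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

Teardown mirrors this, as the destroy_subsystems blocks earlier in the log show: nvmf_delete_subsystem for each cnode followed by bdev_null_delete for its backing bdev.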
18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:53.073 "params": { 00:20:53.073 "name": "Nvme0", 00:20:53.073 "trtype": "tcp", 00:20:53.073 "traddr": "10.0.0.2", 00:20:53.073 "adrfam": "ipv4", 00:20:53.073 "trsvcid": "4420", 00:20:53.073 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:53.073 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:53.073 "hdgst": true, 00:20:53.073 "ddgst": true 00:20:53.073 }, 00:20:53.073 "method": "bdev_nvme_attach_controller" 00:20:53.073 }' 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:53.073 18:42:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:53.073 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:53.073 ... 
00:20:53.073 fio-3.35 00:20:53.073 Starting 3 threads 00:21:05.273 00:21:05.273 filename0: (groupid=0, jobs=1): err= 0: pid=83839: Thu May 16 18:42:16 2024 00:21:05.273 read: IOPS=236, BW=29.5MiB/s (31.0MB/s)(296MiB/10007msec) 00:21:05.273 slat (nsec): min=7118, max=47097, avg=15137.08, stdev=4848.19 00:21:05.273 clat (usec): min=10763, max=14435, avg=12664.78, stdev=504.82 00:21:05.273 lat (usec): min=10777, max=14466, avg=12679.91, stdev=505.26 00:21:05.273 clat percentiles (usec): 00:21:05.273 | 1.00th=[11863], 5.00th=[11994], 10.00th=[11994], 20.00th=[12125], 00:21:05.273 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:21:05.273 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:21:05.273 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14353], 99.95th=[14484], 00:21:05.273 | 99.99th=[14484] 00:21:05.273 bw ( KiB/s): min=29184, max=31488, per=33.38%, avg=30275.37, stdev=738.23, samples=19 00:21:05.273 iops : min= 228, max= 246, avg=236.53, stdev= 5.77, samples=19 00:21:05.273 lat (msec) : 20=100.00% 00:21:05.273 cpu : usr=91.04%, sys=8.41%, ctx=27, majf=0, minf=0 00:21:05.273 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.273 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.273 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:05.273 filename0: (groupid=0, jobs=1): err= 0: pid=83840: Thu May 16 18:42:16 2024 00:21:05.273 read: IOPS=236, BW=29.5MiB/s (31.0MB/s)(296MiB/10008msec) 00:21:05.274 slat (nsec): min=7079, max=45045, avg=14349.78, stdev=4420.63 00:21:05.274 clat (usec): min=10759, max=14436, avg=12668.51, stdev=506.98 00:21:05.274 lat (usec): min=10773, max=14464, avg=12682.86, stdev=507.47 00:21:05.274 clat percentiles (usec): 00:21:05.274 | 1.00th=[11863], 5.00th=[11994], 10.00th=[11994], 20.00th=[12125], 00:21:05.274 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:21:05.274 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:21:05.274 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14353], 99.95th=[14484], 00:21:05.274 | 99.99th=[14484] 00:21:05.274 bw ( KiB/s): min=29184, max=31488, per=33.38%, avg=30275.37, stdev=738.23, samples=19 00:21:05.274 iops : min= 228, max= 246, avg=236.53, stdev= 5.77, samples=19 00:21:05.274 lat (msec) : 20=100.00% 00:21:05.274 cpu : usr=91.59%, sys=7.87%, ctx=10, majf=0, minf=0 00:21:05.274 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.274 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.274 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:05.274 filename0: (groupid=0, jobs=1): err= 0: pid=83841: Thu May 16 18:42:16 2024 00:21:05.274 read: IOPS=236, BW=29.5MiB/s (31.0MB/s)(296MiB/10010msec) 00:21:05.274 slat (nsec): min=6930, max=50425, avg=10277.60, stdev=4412.05 00:21:05.274 clat (usec): min=11818, max=14897, avg=12676.57, stdev=505.30 00:21:05.274 lat (usec): min=11826, max=14911, avg=12686.84, stdev=505.63 00:21:05.274 clat percentiles (usec): 00:21:05.274 | 1.00th=[11863], 5.00th=[11994], 10.00th=[11994], 20.00th=[12125], 00:21:05.274 | 30.00th=[12387], 40.00th=[12518], 
50.00th=[12649], 60.00th=[12780], 00:21:05.274 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13435], 95.00th=[13566], 00:21:05.274 | 99.00th=[13960], 99.50th=[13960], 99.90th=[14877], 99.95th=[14877], 00:21:05.274 | 99.99th=[14877] 00:21:05.274 bw ( KiB/s): min=29184, max=31488, per=33.38%, avg=30272.11, stdev=736.29, samples=19 00:21:05.274 iops : min= 228, max= 246, avg=236.47, stdev= 5.74, samples=19 00:21:05.274 lat (msec) : 20=100.00% 00:21:05.274 cpu : usr=92.01%, sys=7.41%, ctx=18, majf=0, minf=0 00:21:05.274 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:05.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:05.274 issued rwts: total=2364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:05.274 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:05.274 00:21:05.274 Run status group 0 (all jobs): 00:21:05.274 READ: bw=88.6MiB/s (92.9MB/s), 29.5MiB/s-29.5MiB/s (31.0MB/s-31.0MB/s), io=887MiB (930MB), run=10007-10010msec 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.274 00:21:05.274 real 0m11.050s 00:21:05.274 user 0m28.164s 00:21:05.274 sys 0m2.653s 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:05.274 18:42:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:05.274 ************************************ 00:21:05.274 END TEST fio_dif_digest 00:21:05.274 ************************************ 00:21:05.274 18:42:16 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:05.274 18:42:16 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:05.274 18:42:16 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:05.274 18:42:16 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.274 rmmod nvme_tcp 00:21:05.274 rmmod nvme_fabrics 00:21:05.274 rmmod nvme_keyring 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.274 18:42:17 nvmf_dif 
-- nvmf/common.sh@124 -- # set -e 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83076 ']' 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83076 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 83076 ']' 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 83076 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83076 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:05.274 killing process with pid 83076 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83076' 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@965 -- # kill 83076 00:21:05.274 [2024-05-16 18:42:17.092053] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@970 -- # wait 83076 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:05.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:05.274 Waiting for block devices as requested 00:21:05.274 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.274 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.274 18:42:17 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:05.274 00:21:05.274 real 1m0.032s 00:21:05.274 user 3m48.387s 00:21:05.274 sys 0m19.564s 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:05.274 18:42:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:05.274 ************************************ 00:21:05.274 END TEST nvmf_dif 00:21:05.274 ************************************ 00:21:05.274 18:42:18 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:05.274 18:42:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:05.274 18:42:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:05.274 18:42:18 -- common/autotest_common.sh@10 -- # set +x 00:21:05.274 ************************************ 00:21:05.274 START TEST nvmf_abort_qd_sizes 00:21:05.274 ************************************ 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:05.274 * Looking for test storage... 00:21:05.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.274 18:42:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:05.275 18:42:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:05.275 Cannot find device "nvmf_tgt_br" 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:05.275 Cannot find device "nvmf_tgt_br2" 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:05.275 Cannot find device "nvmf_tgt_br" 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:05.275 Cannot find device "nvmf_tgt_br2" 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:05.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:05.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:05.275 18:42:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:05.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:05.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:21:05.275 00:21:05.275 --- 10.0.0.2 ping statistics --- 00:21:05.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.275 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:05.275 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:05.275 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:21:05.275 00:21:05.275 --- 10.0.0.3 ping statistics --- 00:21:05.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.275 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:05.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:05.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:05.275 00:21:05.275 --- 10.0.0.1 ping statistics --- 00:21:05.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.275 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:05.275 18:42:18 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:05.844 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:05.844 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:05.844 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84435 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84435 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 84435 ']' 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:05.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:05.844 18:42:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:06.102 [2024-05-16 18:42:19.356465] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:21:06.102 [2024-05-16 18:42:19.356556] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.102 [2024-05-16 18:42:19.499534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.361 [2024-05-16 18:42:19.615538] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.361 [2024-05-16 18:42:19.615601] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.361 [2024-05-16 18:42:19.615616] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.361 [2024-05-16 18:42:19.615628] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.361 [2024-05-16 18:42:19.615638] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.361 [2024-05-16 18:42:19.615795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.361 [2024-05-16 18:42:19.616470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.361 [2024-05-16 18:42:19.616595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.361 [2024-05-16 18:42:19.616603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.361 [2024-05-16 18:42:19.690471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- 
scripts/common.sh@233 -- # printf %02x 1 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:06.928 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:21:06.929 18:42:20 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:06.929 18:42:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:06.929 ************************************ 00:21:06.929 START TEST spdk_target_abort 00:21:06.929 ************************************ 00:21:06.929 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:21:06.929 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:06.929 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:06.929 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.929 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:07.187 spdk_targetn1 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:07.187 [2024-05-16 18:42:20.496136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:07.187 [2024-05-16 18:42:20.528093] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of 
trtype to be removed in v24.09 00:21:07.187 [2024-05-16 18:42:20.528356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:07.187 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:07.188 18:42:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:10.471 Initializing NVMe Controllers 00:21:10.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:10.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:10.471 Initialization complete. Launching workers. 
00:21:10.471 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9520, failed: 0 00:21:10.471 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1071, failed to submit 8449 00:21:10.471 success 776, unsuccess 295, failed 0 00:21:10.471 18:42:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:10.472 18:42:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:13.755 Initializing NVMe Controllers 00:21:13.755 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:13.755 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:13.755 Initialization complete. Launching workers. 00:21:13.755 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8976, failed: 0 00:21:13.755 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1162, failed to submit 7814 00:21:13.755 success 387, unsuccess 775, failed 0 00:21:13.755 18:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:13.755 18:42:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:17.041 Initializing NVMe Controllers 00:21:17.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:17.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:17.041 Initialization complete. Launching workers. 
00:21:17.041 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31673, failed: 0 00:21:17.041 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2329, failed to submit 29344 00:21:17.041 success 456, unsuccess 1873, failed 0 00:21:17.041 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:17.041 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.041 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:17.041 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.041 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:17.041 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.041 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84435 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 84435 ']' 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 84435 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84435 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:17.607 killing process with pid 84435 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84435' 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 84435 00:21:17.607 [2024-05-16 18:42:30.971003] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:17.607 18:42:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 84435 00:21:17.865 00:21:17.865 real 0m10.863s 00:21:17.865 user 0m43.157s 00:21:17.865 sys 0m2.349s 00:21:17.865 18:42:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:17.865 18:42:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:17.865 ************************************ 00:21:17.865 END TEST spdk_target_abort 00:21:17.865 ************************************ 00:21:17.865 18:42:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:17.866 18:42:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:17.866 18:42:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:21:17.866 18:42:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:17.866 ************************************ 00:21:17.866 START TEST kernel_target_abort 00:21:17.866 ************************************ 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:17.866 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:18.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:18.433 Waiting for block devices as requested 00:21:18.433 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:18.433 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:18.433 No valid GPT data, bailing 00:21:18.433 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:18.693 No valid GPT data, bailing 00:21:18.693 18:42:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:18.693 No valid GPT data, bailing 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:18.693 No valid GPT data, bailing 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:18.693 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add --hostid=8b07fcc8-e6b3-4152-8362-9695ab742add -a 10.0.0.1 -t tcp -s 4420 00:21:18.962 00:21:18.962 Discovery Log Number of Records 2, Generation counter 2 00:21:18.962 =====Discovery Log Entry 0====== 00:21:18.962 trtype: tcp 00:21:18.962 adrfam: ipv4 00:21:18.962 subtype: current discovery subsystem 00:21:18.962 treq: not specified, sq flow control disable supported 00:21:18.962 portid: 1 00:21:18.962 trsvcid: 4420 00:21:18.962 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:18.962 traddr: 10.0.0.1 00:21:18.962 eflags: none 00:21:18.962 sectype: none 00:21:18.962 =====Discovery Log Entry 1====== 00:21:18.962 trtype: tcp 00:21:18.963 adrfam: ipv4 00:21:18.963 subtype: nvme subsystem 00:21:18.963 treq: not specified, sq flow control disable supported 00:21:18.963 portid: 1 00:21:18.963 trsvcid: 4420 00:21:18.963 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:18.963 traddr: 10.0.0.1 00:21:18.963 eflags: none 00:21:18.963 sectype: none 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:18.963 18:42:32 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:18.963 18:42:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:22.310 Initializing NVMe Controllers 00:21:22.310 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:22.310 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:22.310 Initialization complete. Launching workers. 00:21:22.310 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31747, failed: 0 00:21:22.310 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31747, failed to submit 0 00:21:22.310 success 0, unsuccess 31747, failed 0 00:21:22.310 18:42:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:22.310 18:42:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:25.595 Initializing NVMe Controllers 00:21:25.595 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:25.595 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:25.595 Initialization complete. Launching workers. 
00:21:25.595 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66146, failed: 0 00:21:25.595 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27395, failed to submit 38751 00:21:25.595 success 0, unsuccess 27395, failed 0 00:21:25.595 18:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:25.595 18:42:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:28.883 Initializing NVMe Controllers 00:21:28.883 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:28.883 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:28.883 Initialization complete. Launching workers. 00:21:28.883 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72552, failed: 0 00:21:28.883 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18126, failed to submit 54426 00:21:28.883 success 0, unsuccess 18126, failed 0 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:28.883 18:42:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:29.142 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:30.520 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:30.520 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:30.779 ************************************ 00:21:30.779 END TEST kernel_target_abort 00:21:30.779 ************************************ 00:21:30.779 00:21:30.779 real 0m12.714s 00:21:30.779 user 0m5.508s 00:21:30.779 sys 0m4.458s 00:21:30.779 18:42:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:30.779 18:42:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:30.779 
18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.779 rmmod nvme_tcp 00:21:30.779 rmmod nvme_fabrics 00:21:30.779 rmmod nvme_keyring 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.779 Process with pid 84435 is not found 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84435 ']' 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84435 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 84435 ']' 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 84435 00:21:30.779 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (84435) - No such process 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 84435 is not found' 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:30.779 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:31.038 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:31.038 Waiting for block devices as requested 00:21:31.296 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:31.296 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:31.296 00:21:31.296 real 0m26.744s 00:21:31.296 user 0m49.867s 00:21:31.296 sys 0m8.098s 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:31.296 ************************************ 00:21:31.296 END TEST nvmf_abort_qd_sizes 00:21:31.296 ************************************ 00:21:31.296 18:42:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:31.296 18:42:44 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:31.296 18:42:44 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:31.296 18:42:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:31.296 18:42:44 -- common/autotest_common.sh@10 -- # set +x 
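The kernel_target_abort run that just finished drives the in-kernel NVMe/TCP target entirely through configfs; xtrace records the mkdir/echo/ln commands but not the files they redirect into. Below is a minimal sketch of an equivalent setup, assuming the stock Linux nvmet attribute layout (attr_allow_any_host, device_path, enable, addr_*); the NQN, backing block device and address are the values visible in the trace, everything else is illustrative rather than a copy of the script.

# Sketch: recreate the kernel NVMe/TCP target exercised by kernel_target_abort.
modprobe nvmet
modprobe nvmet-tcp                      # tcp transport; the trace only shows 'modprobe nvmet'
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
# Subsystem identification string; whether the script writes attr_serial or attr_model
# is not visible in the trace, attr_serial is assumed here.
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"   # first nvme block device found not in use
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"     # linking the subsystem to the port starts listening
nvme discover -t tcp -a 10.0.0.1 -s 4420   # host side needs nvme-fabrics/nvme-tcp loaded

The discover call should report the same two discovery log entries seen above, and teardown is the reverse: remove the symlink, rmdir the namespace, port and subsystem directories, then unload nvmet-tcp/nvmet, which is what clean_kernel_target does before setup.sh is re-run.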
00:21:31.555 ************************************ 00:21:31.555 START TEST keyring_file 00:21:31.555 ************************************ 00:21:31.555 18:42:44 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:31.555 * Looking for test storage... 00:21:31.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:31.555 18:42:44 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8b07fcc8-e6b3-4152-8362-9695ab742add 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=8b07fcc8-e6b3-4152-8362-9695ab742add 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.555 18:42:44 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.555 18:42:44 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.555 18:42:44 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.555 18:42:44 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.555 18:42:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.555 18:42:44 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.555 18:42:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:31.555 18:42:44 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:31.555 18:42:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:31.555 18:42:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:31.555 18:42:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:31.555 18:42:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:31.555 18:42:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:31.555 18:42:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xpLRFEBjEc 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:31.555 18:42:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:31.555 18:42:44 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xpLRFEBjEc 00:21:31.555 18:42:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xpLRFEBjEc 00:21:31.556 18:42:44 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xpLRFEBjEc 00:21:31.556 18:42:44 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:31.556 18:42:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:31.556 18:42:44 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:31.556 18:42:44 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:31.556 18:42:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:31.556 18:42:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:31.556 18:42:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XlPLGLuCfw 00:21:31.556 18:42:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:31.556 18:42:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:31.556 18:42:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:31.556 18:42:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:31.556 18:42:44 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:31.556 18:42:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:31.556 18:42:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:31.556 18:42:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XlPLGLuCfw 00:21:31.556 18:42:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XlPLGLuCfw 00:21:31.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.556 18:42:45 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.XlPLGLuCfw 00:21:31.556 18:42:45 keyring_file -- keyring/file.sh@30 -- # tgtpid=85292 00:21:31.556 18:42:45 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:31.556 18:42:45 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85292 00:21:31.556 18:42:45 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 85292 ']' 00:21:31.556 18:42:45 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.556 18:42:45 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:31.556 18:42:45 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.556 18:42:45 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:31.556 18:42:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:31.814 [2024-05-16 18:42:45.080363] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
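The prep_key calls above only trace mktemp, "python -" and chmod 0600; the here-doc fed to python is not shown. The following is a minimal sketch of an equivalent encoder, under the assumption that the ASCII bytes of the configured key are wrapped in the NVMe TLS PSK interchange format, i.e. base64 of the key bytes plus their little-endian CRC32, with digest id 00 meaning no retained hash.

# Sketch: produce a PSK interchange string like the ones written to the /tmp/tmp.* files above.
key=00112233445566778899aabbccddeeff
psk=$(python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key string is used byte-for-byte, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte little-endian CRC32 trailer
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:", end="")
PY
)
keyfile=$(mktemp)
printf '%s' "$psk" > "$keyfile"
chmod 0600 "$keyfile"   # anything looser is rejected, as the chmod 0660 negative test below shows

The 0600 step is not cosmetic: keyring_file_add_key checks the file mode when the key is registered, which is exactly the failure path exercised later in this test.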
00:21:31.814 [2024-05-16 18:42:45.080631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85292 ] 00:21:31.814 [2024-05-16 18:42:45.218995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.073 [2024-05-16 18:42:45.352455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.073 [2024-05-16 18:42:45.426988] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:21:32.642 18:42:46 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:32.642 [2024-05-16 18:42:46.054111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.642 null0 00:21:32.642 [2024-05-16 18:42:46.086048] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:32.642 [2024-05-16 18:42:46.086120] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:32.642 [2024-05-16 18:42:46.086347] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:32.642 [2024-05-16 18:42:46.094071] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.642 18:42:46 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:32.642 [2024-05-16 18:42:46.106072] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:32.642 request: 00:21:32.642 { 00:21:32.642 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:32.642 "secure_channel": false, 00:21:32.642 "listen_address": { 00:21:32.642 "trtype": "tcp", 00:21:32.642 "traddr": "127.0.0.1", 00:21:32.642 "trsvcid": "4420" 00:21:32.642 }, 00:21:32.642 "method": "nvmf_subsystem_add_listener", 00:21:32.642 "req_id": 1 00:21:32.642 } 00:21:32.642 Got JSON-RPC error response 
00:21:32.642 response: 00:21:32.642 { 00:21:32.642 "code": -32602, 00:21:32.642 "message": "Invalid parameters" 00:21:32.642 } 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:32.642 18:42:46 keyring_file -- keyring/file.sh@46 -- # bperfpid=85309 00:21:32.642 18:42:46 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:32.642 18:42:46 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85309 /var/tmp/bperf.sock 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 85309 ']' 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:32.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:32.642 18:42:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:32.901 [2024-05-16 18:42:46.168854] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 00:21:32.901 [2024-05-16 18:42:46.169115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85309 ] 00:21:32.901 [2024-05-16 18:42:46.308571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.159 [2024-05-16 18:42:46.426128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.160 [2024-05-16 18:42:46.501611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:33.726 18:42:47 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:33.726 18:42:47 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:21:33.726 18:42:47 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xpLRFEBjEc 00:21:33.726 18:42:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xpLRFEBjEc 00:21:33.984 18:42:47 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XlPLGLuCfw 00:21:33.984 18:42:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XlPLGLuCfw 00:21:34.242 18:42:47 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:34.242 18:42:47 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:34.242 18:42:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.242 18:42:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.242 18:42:47 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:34.501 18:42:47 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.xpLRFEBjEc == \/\t\m\p\/\t\m\p\.\x\p\L\R\F\E\B\j\E\c ]] 00:21:34.501 18:42:47 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:34.501 18:42:47 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:34.501 18:42:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.501 18:42:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:34.501 18:42:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:34.759 18:42:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.XlPLGLuCfw == \/\t\m\p\/\t\m\p\.\X\l\P\L\G\L\u\C\f\w ]] 00:21:34.759 18:42:48 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:34.759 18:42:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:34.759 18:42:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:34.759 18:42:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:34.759 18:42:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:34.759 18:42:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.020 18:42:48 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:35.020 18:42:48 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:35.020 18:42:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:35.020 18:42:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:35.020 18:42:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.020 18:42:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:35.020 18:42:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.279 18:42:48 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:35.279 18:42:48 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:35.279 18:42:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:35.538 [2024-05-16 18:42:48.810464] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.538 nvme0n1 00:21:35.538 18:42:48 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:35.538 18:42:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:35.538 18:42:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:35.538 18:42:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.538 18:42:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:35.538 18:42:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:35.797 18:42:49 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:35.797 18:42:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:35.797 18:42:49 keyring_file -- keyring/common.sh@12 -- # 
get_key key1 00:21:35.797 18:42:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:35.797 18:42:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:35.797 18:42:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:35.797 18:42:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:36.055 18:42:49 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:36.055 18:42:49 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:36.055 Running I/O for 1 seconds... 00:21:36.991 00:21:36.991 Latency(us) 00:21:36.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.991 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:36.991 nvme0n1 : 1.01 13292.92 51.93 0.00 0.00 9599.05 4974.78 19541.64 00:21:36.991 =================================================================================================================== 00:21:36.991 Total : 13292.92 51.93 0.00 0.00 9599.05 4974.78 19541.64 00:21:36.991 0 00:21:36.991 18:42:50 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:36.991 18:42:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:37.559 18:42:50 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:37.559 18:42:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:37.559 18:42:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.559 18:42:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.559 18:42:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.559 18:42:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:37.559 18:42:51 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:37.559 18:42:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:37.559 18:42:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:37.559 18:42:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:37.559 18:42:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:37.559 18:42:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:37.559 18:42:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:37.818 18:42:51 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:37.818 18:42:51 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:37.818 18:42:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:37.818 18:42:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:37.818 18:42:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:37.818 18:42:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.818 18:42:51 
keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:37.818 18:42:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:37.818 18:42:51 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:37.818 18:42:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:38.077 [2024-05-16 18:42:51.492470] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:38.077 [2024-05-16 18:42:51.493153] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cf750 (107): Transport endpoint is not connected 00:21:38.077 [2024-05-16 18:42:51.494140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12cf750 (9): Bad file descriptor 00:21:38.077 [2024-05-16 18:42:51.495137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:38.077 [2024-05-16 18:42:51.495161] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:38.077 [2024-05-16 18:42:51.495172] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:38.077 request: 00:21:38.077 { 00:21:38.077 "name": "nvme0", 00:21:38.077 "trtype": "tcp", 00:21:38.077 "traddr": "127.0.0.1", 00:21:38.077 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:38.077 "adrfam": "ipv4", 00:21:38.077 "trsvcid": "4420", 00:21:38.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:38.077 "psk": "key1", 00:21:38.077 "method": "bdev_nvme_attach_controller", 00:21:38.077 "req_id": 1 00:21:38.077 } 00:21:38.077 Got JSON-RPC error response 00:21:38.077 response: 00:21:38.077 { 00:21:38.078 "code": -32602, 00:21:38.078 "message": "Invalid parameters" 00:21:38.078 } 00:21:38.078 18:42:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:38.078 18:42:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:38.078 18:42:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:38.078 18:42:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:38.078 18:42:51 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:38.078 18:42:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:38.078 18:42:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:38.078 18:42:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:38.078 18:42:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:38.078 18:42:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:38.338 18:42:51 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:38.338 18:42:51 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:38.338 18:42:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:38.338 18:42:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:38.338 18:42:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:38.338 18:42:51 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:38.338 18:42:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:38.596 18:42:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:38.596 18:42:51 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:38.596 18:42:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:38.855 18:42:52 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:38.855 18:42:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:39.114 18:42:52 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:39.114 18:42:52 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:39.114 18:42:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.373 18:42:52 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:39.373 18:42:52 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.xpLRFEBjEc 00:21:39.373 18:42:52 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xpLRFEBjEc 00:21:39.373 18:42:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:39.373 18:42:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xpLRFEBjEc 00:21:39.373 18:42:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:39.373 18:42:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.373 18:42:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:39.373 18:42:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:39.373 18:42:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xpLRFEBjEc 00:21:39.373 18:42:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xpLRFEBjEc 00:21:39.631 [2024-05-16 18:42:52.989741] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xpLRFEBjEc': 0100660 00:21:39.631 [2024-05-16 18:42:52.989785] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:39.631 request: 00:21:39.631 { 00:21:39.631 "name": "key0", 00:21:39.631 "path": "/tmp/tmp.xpLRFEBjEc", 00:21:39.631 "method": "keyring_file_add_key", 00:21:39.631 "req_id": 1 00:21:39.631 } 00:21:39.631 Got JSON-RPC error response 00:21:39.631 response: 00:21:39.631 { 00:21:39.631 "code": -1, 00:21:39.631 "message": "Operation not permitted" 00:21:39.631 } 00:21:39.631 18:42:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:39.631 18:42:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:39.631 18:42:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:39.631 18:42:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:39.631 18:42:53 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.xpLRFEBjEc 00:21:39.631 18:42:53 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xpLRFEBjEc 00:21:39.631 18:42:53 
keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xpLRFEBjEc 00:21:39.890 18:42:53 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.xpLRFEBjEc 00:21:39.890 18:42:53 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:39.890 18:42:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:39.890 18:42:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:39.890 18:42:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:39.890 18:42:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:39.890 18:42:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:40.148 18:42:53 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:40.148 18:42:53 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.148 18:42:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:40.149 18:42:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.149 18:42:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:40.149 18:42:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.149 18:42:53 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:40.149 18:42:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:40.149 18:42:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.149 18:42:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.407 [2024-05-16 18:42:53.769931] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xpLRFEBjEc': No such file or directory 00:21:40.407 [2024-05-16 18:42:53.769979] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:40.407 [2024-05-16 18:42:53.770020] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:40.407 [2024-05-16 18:42:53.770028] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:40.407 [2024-05-16 18:42:53.770036] bdev_nvme.c:6263:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:40.407 request: 00:21:40.407 { 00:21:40.407 "name": "nvme0", 00:21:40.407 "trtype": "tcp", 00:21:40.407 "traddr": "127.0.0.1", 00:21:40.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:40.407 "adrfam": "ipv4", 00:21:40.407 "trsvcid": "4420", 00:21:40.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:40.407 "psk": "key0", 00:21:40.407 "method": "bdev_nvme_attach_controller", 00:21:40.407 "req_id": 1 00:21:40.407 } 00:21:40.407 Got JSON-RPC error response 00:21:40.407 response: 00:21:40.407 { 00:21:40.407 "code": 
-19, 00:21:40.407 "message": "No such device" 00:21:40.407 } 00:21:40.407 18:42:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:40.407 18:42:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:40.407 18:42:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:40.407 18:42:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:40.407 18:42:53 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:40.407 18:42:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:40.665 18:42:54 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.I0BUx2VPA8 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:40.665 18:42:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:40.665 18:42:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:40.665 18:42:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:40.665 18:42:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:40.665 18:42:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:40.665 18:42:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.I0BUx2VPA8 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.I0BUx2VPA8 00:21:40.665 18:42:54 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.I0BUx2VPA8 00:21:40.665 18:42:54 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I0BUx2VPA8 00:21:40.665 18:42:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I0BUx2VPA8 00:21:40.923 18:42:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:40.924 18:42:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:41.182 nvme0n1 00:21:41.182 18:42:54 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:41.182 18:42:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:41.182 18:42:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:41.182 18:42:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:41.182 18:42:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:41.182 18:42:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:41.441 18:42:54 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:41.441 18:42:54 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:41.441 18:42:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:41.700 18:42:55 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:41.700 18:42:55 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:41.700 18:42:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:41.700 18:42:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:41.700 18:42:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:41.958 18:42:55 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:41.958 18:42:55 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:41.958 18:42:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:41.958 18:42:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:41.958 18:42:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:41.959 18:42:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:41.959 18:42:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.217 18:42:55 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:42.217 18:42:55 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:42.217 18:42:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:42.475 18:42:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:42.475 18:42:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:42.475 18:42:55 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:42.733 18:42:56 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:42.733 18:42:56 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.I0BUx2VPA8 00:21:42.733 18:42:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.I0BUx2VPA8 00:21:42.991 18:42:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XlPLGLuCfw 00:21:42.991 18:42:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XlPLGLuCfw 00:21:43.250 18:42:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:43.250 18:42:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:43.508 nvme0n1 00:21:43.508 18:42:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:43.508 18:42:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
save_config 00:21:44.075 18:42:57 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:44.075 "subsystems": [ 00:21:44.075 { 00:21:44.075 "subsystem": "keyring", 00:21:44.075 "config": [ 00:21:44.075 { 00:21:44.075 "method": "keyring_file_add_key", 00:21:44.075 "params": { 00:21:44.075 "name": "key0", 00:21:44.075 "path": "/tmp/tmp.I0BUx2VPA8" 00:21:44.075 } 00:21:44.075 }, 00:21:44.075 { 00:21:44.076 "method": "keyring_file_add_key", 00:21:44.076 "params": { 00:21:44.076 "name": "key1", 00:21:44.076 "path": "/tmp/tmp.XlPLGLuCfw" 00:21:44.076 } 00:21:44.076 } 00:21:44.076 ] 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "subsystem": "iobuf", 00:21:44.076 "config": [ 00:21:44.076 { 00:21:44.076 "method": "iobuf_set_options", 00:21:44.076 "params": { 00:21:44.076 "small_pool_count": 8192, 00:21:44.076 "large_pool_count": 1024, 00:21:44.076 "small_bufsize": 8192, 00:21:44.076 "large_bufsize": 135168 00:21:44.076 } 00:21:44.076 } 00:21:44.076 ] 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "subsystem": "sock", 00:21:44.076 "config": [ 00:21:44.076 { 00:21:44.076 "method": "sock_set_default_impl", 00:21:44.076 "params": { 00:21:44.076 "impl_name": "uring" 00:21:44.076 } 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "method": "sock_impl_set_options", 00:21:44.076 "params": { 00:21:44.076 "impl_name": "ssl", 00:21:44.076 "recv_buf_size": 4096, 00:21:44.076 "send_buf_size": 4096, 00:21:44.076 "enable_recv_pipe": true, 00:21:44.076 "enable_quickack": false, 00:21:44.076 "enable_placement_id": 0, 00:21:44.076 "enable_zerocopy_send_server": true, 00:21:44.076 "enable_zerocopy_send_client": false, 00:21:44.076 "zerocopy_threshold": 0, 00:21:44.076 "tls_version": 0, 00:21:44.076 "enable_ktls": false 00:21:44.076 } 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "method": "sock_impl_set_options", 00:21:44.076 "params": { 00:21:44.076 "impl_name": "posix", 00:21:44.076 "recv_buf_size": 2097152, 00:21:44.076 "send_buf_size": 2097152, 00:21:44.076 "enable_recv_pipe": true, 00:21:44.076 "enable_quickack": false, 00:21:44.076 "enable_placement_id": 0, 00:21:44.076 "enable_zerocopy_send_server": true, 00:21:44.076 "enable_zerocopy_send_client": false, 00:21:44.076 "zerocopy_threshold": 0, 00:21:44.076 "tls_version": 0, 00:21:44.076 "enable_ktls": false 00:21:44.076 } 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "method": "sock_impl_set_options", 00:21:44.076 "params": { 00:21:44.076 "impl_name": "uring", 00:21:44.076 "recv_buf_size": 2097152, 00:21:44.076 "send_buf_size": 2097152, 00:21:44.076 "enable_recv_pipe": true, 00:21:44.076 "enable_quickack": false, 00:21:44.076 "enable_placement_id": 0, 00:21:44.076 "enable_zerocopy_send_server": false, 00:21:44.076 "enable_zerocopy_send_client": false, 00:21:44.076 "zerocopy_threshold": 0, 00:21:44.076 "tls_version": 0, 00:21:44.076 "enable_ktls": false 00:21:44.076 } 00:21:44.076 } 00:21:44.076 ] 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "subsystem": "vmd", 00:21:44.076 "config": [] 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "subsystem": "accel", 00:21:44.076 "config": [ 00:21:44.076 { 00:21:44.076 "method": "accel_set_options", 00:21:44.076 "params": { 00:21:44.076 "small_cache_size": 128, 00:21:44.076 "large_cache_size": 16, 00:21:44.076 "task_count": 2048, 00:21:44.076 "sequence_count": 2048, 00:21:44.076 "buf_count": 2048 00:21:44.076 } 00:21:44.076 } 00:21:44.076 ] 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "subsystem": "bdev", 00:21:44.076 "config": [ 00:21:44.076 { 00:21:44.076 "method": "bdev_set_options", 00:21:44.076 "params": { 00:21:44.076 
"bdev_io_pool_size": 65535, 00:21:44.076 "bdev_io_cache_size": 256, 00:21:44.076 "bdev_auto_examine": true, 00:21:44.076 "iobuf_small_cache_size": 128, 00:21:44.076 "iobuf_large_cache_size": 16 00:21:44.076 } 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "method": "bdev_raid_set_options", 00:21:44.076 "params": { 00:21:44.076 "process_window_size_kb": 1024 00:21:44.076 } 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "method": "bdev_iscsi_set_options", 00:21:44.076 "params": { 00:21:44.076 "timeout_sec": 30 00:21:44.076 } 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "method": "bdev_nvme_set_options", 00:21:44.076 "params": { 00:21:44.076 "action_on_timeout": "none", 00:21:44.076 "timeout_us": 0, 00:21:44.076 "timeout_admin_us": 0, 00:21:44.076 "keep_alive_timeout_ms": 10000, 00:21:44.076 "arbitration_burst": 0, 00:21:44.076 "low_priority_weight": 0, 00:21:44.076 "medium_priority_weight": 0, 00:21:44.076 "high_priority_weight": 0, 00:21:44.076 "nvme_adminq_poll_period_us": 10000, 00:21:44.076 "nvme_ioq_poll_period_us": 0, 00:21:44.076 "io_queue_requests": 512, 00:21:44.076 "delay_cmd_submit": true, 00:21:44.076 "transport_retry_count": 4, 00:21:44.076 "bdev_retry_count": 3, 00:21:44.076 "transport_ack_timeout": 0, 00:21:44.076 "ctrlr_loss_timeout_sec": 0, 00:21:44.076 "reconnect_delay_sec": 0, 00:21:44.076 "fast_io_fail_timeout_sec": 0, 00:21:44.076 "disable_auto_failback": false, 00:21:44.076 "generate_uuids": false, 00:21:44.076 "transport_tos": 0, 00:21:44.076 "nvme_error_stat": false, 00:21:44.076 "rdma_srq_size": 0, 00:21:44.076 "io_path_stat": false, 00:21:44.076 "allow_accel_sequence": false, 00:21:44.076 "rdma_max_cq_size": 0, 00:21:44.076 "rdma_cm_event_timeout_ms": 0, 00:21:44.076 "dhchap_digests": [ 00:21:44.076 "sha256", 00:21:44.076 "sha384", 00:21:44.076 "sha512" 00:21:44.076 ], 00:21:44.076 "dhchap_dhgroups": [ 00:21:44.076 "null", 00:21:44.076 "ffdhe2048", 00:21:44.076 "ffdhe3072", 00:21:44.076 "ffdhe4096", 00:21:44.076 "ffdhe6144", 00:21:44.076 "ffdhe8192" 00:21:44.076 ] 00:21:44.076 } 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "method": "bdev_nvme_attach_controller", 00:21:44.076 "params": { 00:21:44.076 "name": "nvme0", 00:21:44.076 "trtype": "TCP", 00:21:44.076 "adrfam": "IPv4", 00:21:44.076 "traddr": "127.0.0.1", 00:21:44.076 "trsvcid": "4420", 00:21:44.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.076 "prchk_reftag": false, 00:21:44.076 "prchk_guard": false, 00:21:44.076 "ctrlr_loss_timeout_sec": 0, 00:21:44.076 "reconnect_delay_sec": 0, 00:21:44.076 "fast_io_fail_timeout_sec": 0, 00:21:44.076 "psk": "key0", 00:21:44.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:44.076 "hdgst": false, 00:21:44.076 "ddgst": false 00:21:44.076 } 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "method": "bdev_nvme_set_hotplug", 00:21:44.076 "params": { 00:21:44.076 "period_us": 100000, 00:21:44.076 "enable": false 00:21:44.076 } 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "method": "bdev_wait_for_examine" 00:21:44.076 } 00:21:44.076 ] 00:21:44.076 }, 00:21:44.076 { 00:21:44.076 "subsystem": "nbd", 00:21:44.076 "config": [] 00:21:44.076 } 00:21:44.076 ] 00:21:44.076 }' 00:21:44.076 18:42:57 keyring_file -- keyring/file.sh@114 -- # killprocess 85309 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 85309 ']' 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@950 -- # kill -0 85309 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@951 -- # uname 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = 
Linux ']' 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85309 00:21:44.076 killing process with pid 85309 00:21:44.076 Received shutdown signal, test time was about 1.000000 seconds 00:21:44.076 00:21:44.076 Latency(us) 00:21:44.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.076 =================================================================================================================== 00:21:44.076 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85309' 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@965 -- # kill 85309 00:21:44.076 18:42:57 keyring_file -- common/autotest_common.sh@970 -- # wait 85309 00:21:44.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:44.336 18:42:57 keyring_file -- keyring/file.sh@117 -- # bperfpid=85553 00:21:44.336 18:42:57 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85553 /var/tmp/bperf.sock 00:21:44.336 18:42:57 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 85553 ']' 00:21:44.336 18:42:57 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:44.336 18:42:57 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:44.336 18:42:57 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:44.336 18:42:57 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
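The keyring_file steps traced above reduce to a short RPC sequence against the bperf socket. A minimal sketch of that flow, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and a bdevperf instance already listening on /var/tmp/bperf.sock; the $RPC shorthand is introduced here for brevity, the expected values in the comments mirror the trace, and /tmp/tmp.I0BUx2VPA8 stands in for any 0600 PSK file registered earlier:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"   # shorthand, assumed

    # register the PSK file under the name key0
    $RPC keyring_file_add_key key0 /tmp/tmp.I0BUx2VPA8

    # attach an NVMe-oF TCP controller that references the key; key0's refcnt rises to 2
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    $RPC keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'    # 2

    # removing an in-use key only marks it removed; the controller still holds one reference
    $RPC keyring_file_remove_key key0
    $RPC keyring_get_keys | jq '.[] | select(.name == "key0") | .removed'   # true
    $RPC keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'    # 1

    # detaching the controller drops the last reference and the keyring empties
    $RPC bdev_nvme_detach_controller nvme0
    $RPC keyring_get_keys | jq length                                       # 0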
00:21:44.336 18:42:57 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:44.336 18:42:57 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:44.336 "subsystems": [ 00:21:44.336 { 00:21:44.336 "subsystem": "keyring", 00:21:44.336 "config": [ 00:21:44.336 { 00:21:44.336 "method": "keyring_file_add_key", 00:21:44.336 "params": { 00:21:44.336 "name": "key0", 00:21:44.336 "path": "/tmp/tmp.I0BUx2VPA8" 00:21:44.336 } 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "method": "keyring_file_add_key", 00:21:44.336 "params": { 00:21:44.336 "name": "key1", 00:21:44.336 "path": "/tmp/tmp.XlPLGLuCfw" 00:21:44.336 } 00:21:44.336 } 00:21:44.336 ] 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "subsystem": "iobuf", 00:21:44.336 "config": [ 00:21:44.336 { 00:21:44.336 "method": "iobuf_set_options", 00:21:44.336 "params": { 00:21:44.336 "small_pool_count": 8192, 00:21:44.336 "large_pool_count": 1024, 00:21:44.336 "small_bufsize": 8192, 00:21:44.336 "large_bufsize": 135168 00:21:44.336 } 00:21:44.336 } 00:21:44.336 ] 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "subsystem": "sock", 00:21:44.336 "config": [ 00:21:44.336 { 00:21:44.336 "method": "sock_set_default_impl", 00:21:44.336 "params": { 00:21:44.336 "impl_name": "uring" 00:21:44.336 } 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "method": "sock_impl_set_options", 00:21:44.336 "params": { 00:21:44.336 "impl_name": "ssl", 00:21:44.336 "recv_buf_size": 4096, 00:21:44.336 "send_buf_size": 4096, 00:21:44.336 "enable_recv_pipe": true, 00:21:44.336 "enable_quickack": false, 00:21:44.336 "enable_placement_id": 0, 00:21:44.336 "enable_zerocopy_send_server": true, 00:21:44.336 "enable_zerocopy_send_client": false, 00:21:44.336 "zerocopy_threshold": 0, 00:21:44.336 "tls_version": 0, 00:21:44.336 "enable_ktls": false 00:21:44.336 } 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "method": "sock_impl_set_options", 00:21:44.336 "params": { 00:21:44.336 "impl_name": "posix", 00:21:44.336 "recv_buf_size": 2097152, 00:21:44.336 "send_buf_size": 2097152, 00:21:44.336 "enable_recv_pipe": true, 00:21:44.336 "enable_quickack": false, 00:21:44.336 "enable_placement_id": 0, 00:21:44.336 "enable_zerocopy_send_server": true, 00:21:44.336 "enable_zerocopy_send_client": false, 00:21:44.336 "zerocopy_threshold": 0, 00:21:44.336 "tls_version": 0, 00:21:44.336 "enable_ktls": false 00:21:44.336 } 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "method": "sock_impl_set_options", 00:21:44.336 "params": { 00:21:44.336 "impl_name": "uring", 00:21:44.336 "recv_buf_size": 2097152, 00:21:44.336 "send_buf_size": 2097152, 00:21:44.336 "enable_recv_pipe": true, 00:21:44.336 "enable_quickack": false, 00:21:44.336 "enable_placement_id": 0, 00:21:44.336 "enable_zerocopy_send_server": false, 00:21:44.336 "enable_zerocopy_send_client": false, 00:21:44.336 "zerocopy_threshold": 0, 00:21:44.336 "tls_version": 0, 00:21:44.336 "enable_ktls": false 00:21:44.336 } 00:21:44.336 } 00:21:44.336 ] 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "subsystem": "vmd", 00:21:44.336 "config": [] 00:21:44.336 }, 00:21:44.336 { 00:21:44.336 "subsystem": "accel", 00:21:44.336 "config": [ 00:21:44.336 { 00:21:44.337 "method": "accel_set_options", 00:21:44.337 "params": { 00:21:44.337 "small_cache_size": 128, 00:21:44.337 "large_cache_size": 16, 00:21:44.337 "task_count": 2048, 00:21:44.337 "sequence_count": 2048, 00:21:44.337 "buf_count": 2048 00:21:44.337 } 00:21:44.337 } 00:21:44.337 ] 00:21:44.337 }, 00:21:44.337 { 00:21:44.337 "subsystem": "bdev", 00:21:44.337 "config": [ 00:21:44.337 { 00:21:44.337 "method": 
"bdev_set_options", 00:21:44.337 "params": { 00:21:44.337 "bdev_io_pool_size": 65535, 00:21:44.337 "bdev_io_cache_size": 256, 00:21:44.337 "bdev_auto_examine": true, 00:21:44.337 "iobuf_small_cache_size": 128, 00:21:44.337 "iobuf_large_cache_size": 16 00:21:44.337 } 00:21:44.337 }, 00:21:44.337 { 00:21:44.337 "method": "bdev_raid_set_options", 00:21:44.337 "params": { 00:21:44.337 "process_window_size_kb": 1024 00:21:44.337 } 00:21:44.337 }, 00:21:44.337 { 00:21:44.337 "method": "bdev_iscsi_set_options", 00:21:44.337 "params": { 00:21:44.337 "timeout_sec": 30 00:21:44.337 } 00:21:44.337 }, 00:21:44.337 { 00:21:44.337 "method": "bdev_nvme_set_options", 00:21:44.337 "params": { 00:21:44.337 "action_on_timeout": "none", 00:21:44.337 "timeout_us": 0, 00:21:44.337 "timeout_admin_us": 0, 00:21:44.337 "keep_alive_timeout_ms": 10000, 00:21:44.337 "arbitration_burst": 0, 00:21:44.337 "low_priority_weight": 0, 00:21:44.337 "medium_priority_weight": 0, 00:21:44.337 "high_priority_weight": 0, 00:21:44.337 "nvme_adminq_poll_period_us": 10000, 00:21:44.337 "nvme_ioq_poll_period_us": 0, 00:21:44.337 "io_queue_requests": 512, 00:21:44.337 "delay_cmd_submit": true, 00:21:44.337 "transport_retry_count": 4, 00:21:44.337 "bdev_retry_count": 3, 00:21:44.337 "transport_ack_timeout": 0, 00:21:44.337 "ctrlr_loss_timeout_sec": 0, 00:21:44.337 "reconnect_delay_sec": 0, 00:21:44.337 "fast_io_fail_timeout_sec": 0, 00:21:44.337 "disable_auto_failback": false, 00:21:44.337 "generate_uuids": false, 00:21:44.337 "transport_tos": 0, 00:21:44.337 "nvme_error_stat": false, 00:21:44.337 "rdma_srq_size": 0, 00:21:44.337 "io_path_stat": false, 00:21:44.337 "allow_accel_sequence": false, 00:21:44.337 "rdma_max_cq_size": 0, 00:21:44.337 "rdma_cm_event_timeout_ms": 0, 00:21:44.337 "dhchap_digests": [ 00:21:44.337 "sha256", 00:21:44.337 "sha384", 00:21:44.337 "sha512" 00:21:44.337 ], 00:21:44.337 "dhchap_dhgroups": [ 00:21:44.337 "null", 00:21:44.337 "ffdhe2048", 00:21:44.337 "ffdhe3072", 00:21:44.337 "ffdhe4096", 00:21:44.337 "ffdhe6144", 00:21:44.337 "ffdhe8192" 00:21:44.337 ] 00:21:44.337 } 00:21:44.337 }, 00:21:44.337 { 00:21:44.337 "method": "bdev_nvme_attach_controller", 00:21:44.337 "params": { 00:21:44.337 "name": "nvme0", 00:21:44.337 "trtype": "TCP", 00:21:44.337 "adrfam": "IPv4", 00:21:44.337 "traddr": "127.0.0.1", 00:21:44.337 "trsvcid": "4420", 00:21:44.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.337 "prchk_reftag": false, 00:21:44.337 "prchk_guard": false, 00:21:44.337 "ctrlr_loss_timeout_sec": 0, 00:21:44.337 "reconnect_delay_sec": 0, 00:21:44.337 "fast_io_fail_timeout_sec": 0, 00:21:44.337 "psk": "key0", 00:21:44.337 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:44.337 "hdgst": false, 00:21:44.337 "ddgst": false 00:21:44.337 } 00:21:44.337 }, 00:21:44.337 { 00:21:44.337 "method": "bdev_nvme_set_hotplug", 00:21:44.337 "params": { 00:21:44.337 "period_us": 100000, 00:21:44.337 "enable": false 00:21:44.337 } 00:21:44.337 }, 00:21:44.337 { 00:21:44.337 "method": "bdev_wait_for_examine" 00:21:44.337 } 00:21:44.337 ] 00:21:44.337 }, 00:21:44.337 { 00:21:44.337 "subsystem": "nbd", 00:21:44.337 "config": [] 00:21:44.337 } 00:21:44.337 ] 00:21:44.337 }' 00:21:44.337 18:42:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:44.337 [2024-05-16 18:42:57.665189] Starting SPDK v24.09-pre git sha1 cf8ec7cfe / DPDK 24.03.0 initialization... 
00:21:44.337 [2024-05-16 18:42:57.666400] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85553 ] 00:21:44.337 [2024-05-16 18:42:57.804868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.596 [2024-05-16 18:42:57.928116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.596 [2024-05-16 18:42:58.084302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:44.854 [2024-05-16 18:42:58.150352] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.112 18:42:58 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:45.112 18:42:58 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:21:45.112 18:42:58 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:45.112 18:42:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.112 18:42:58 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:45.370 18:42:58 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:45.370 18:42:58 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:45.370 18:42:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:45.370 18:42:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:45.370 18:42:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:45.370 18:42:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:45.370 18:42:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.938 18:42:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:45.938 18:42:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:45.938 18:42:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:45.938 18:42:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:45.938 18:42:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:45.938 18:42:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.938 18:42:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:45.938 18:42:59 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:45.938 18:42:59 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:45.938 18:42:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:45.938 18:42:59 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:46.196 18:42:59 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:46.196 18:42:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:46.196 18:42:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.I0BUx2VPA8 /tmp/tmp.XlPLGLuCfw 00:21:46.196 18:42:59 keyring_file -- keyring/file.sh@20 -- # killprocess 85553 00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 85553 ']' 00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@950 -- # kill -0 85553 00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@951 -- # uname 
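The restart sequence in keyring/file.sh@112–@123 relies on save_config plus process substitution: the JSON captured from the first bdevperf instance is replayed into a fresh one through -c /dev/fd/63, so both keys and the nvme0 controller come back without being re-added by hand. A rough sketch of that pattern; the pid variables and the use of <(echo ...) are inferred from the /dev/fd/63 argument in the trace rather than copied verbatim from the test script:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

    # capture the live configuration, keyring entries and attached controller included
    config=$($RPC save_config)

    # stop the old bdevperf (pid variable is illustrative), then start a new one that
    # reads the saved JSON from a process-substitution file descriptor
    kill "$old_bperf_pid"; wait "$old_bperf_pid" 2>/dev/null
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    new_bperf_pid=$!

    # once the socket is back, the keys and the controller should already be present
    $RPC keyring_get_keys | jq length                    # 2
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'    # nvme0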
00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85553 00:21:46.196 killing process with pid 85553 00:21:46.196 Received shutdown signal, test time was about 1.000000 seconds 00:21:46.196 00:21:46.196 Latency(us) 00:21:46.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.196 =================================================================================================================== 00:21:46.196 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85553' 00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@965 -- # kill 85553 00:21:46.196 18:42:59 keyring_file -- common/autotest_common.sh@970 -- # wait 85553 00:21:46.763 18:42:59 keyring_file -- keyring/file.sh@21 -- # killprocess 85292 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 85292 ']' 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@950 -- # kill -0 85292 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@951 -- # uname 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85292 00:21:46.763 killing process with pid 85292 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85292' 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@965 -- # kill 85292 00:21:46.763 [2024-05-16 18:42:59.989217] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:46.763 [2024-05-16 18:42:59.989325] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:46.763 18:42:59 keyring_file -- common/autotest_common.sh@970 -- # wait 85292 00:21:47.036 00:21:47.036 real 0m15.715s 00:21:47.036 user 0m38.501s 00:21:47.036 sys 0m3.243s 00:21:47.036 18:43:00 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:47.036 18:43:00 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:47.036 ************************************ 00:21:47.036 END TEST keyring_file 00:21:47.036 ************************************ 00:21:47.317 18:43:00 -- spdk/autotest.sh@296 -- # [[ n == y ]] 00:21:47.317 18:43:00 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@343 -- # 
'[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:21:47.317 18:43:00 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:21:47.317 18:43:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:21:47.317 18:43:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:21:47.317 18:43:00 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:21:47.317 18:43:00 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:21:47.317 18:43:00 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:21:47.317 18:43:00 -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:47.317 18:43:00 -- common/autotest_common.sh@10 -- # set +x 00:21:47.317 18:43:00 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:21:47.317 18:43:00 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:21:47.317 18:43:00 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:21:47.317 18:43:00 -- common/autotest_common.sh@10 -- # set +x 00:21:48.695 INFO: APP EXITING 00:21:48.695 INFO: killing all VMs 00:21:48.695 INFO: killing vhost app 00:21:48.695 INFO: EXIT DONE 00:21:49.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:49.631 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:49.631 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:50.199 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:50.199 Cleaning 00:21:50.199 Removing: /var/run/dpdk/spdk0/config 00:21:50.199 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:50.199 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:50.199 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:50.199 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:50.199 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:50.199 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:50.199 Removing: /var/run/dpdk/spdk1/config 00:21:50.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:50.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:50.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:50.199 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:50.199 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:50.199 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:50.199 Removing: /var/run/dpdk/spdk2/config 00:21:50.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:50.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:50.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:50.199 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:50.199 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:50.199 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:50.199 Removing: /var/run/dpdk/spdk3/config 00:21:50.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:50.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:50.199 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:50.457 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:50.457 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:50.457 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:50.457 Removing: /var/run/dpdk/spdk4/config 00:21:50.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:50.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:50.457 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:50.457 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:50.457 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:50.457 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:50.457 Removing: /dev/shm/nvmf_trace.0 00:21:50.457 Removing: /dev/shm/spdk_tgt_trace.pid58615 00:21:50.457 Removing: /var/run/dpdk/spdk0 00:21:50.457 Removing: /var/run/dpdk/spdk1 00:21:50.457 Removing: /var/run/dpdk/spdk2 00:21:50.457 Removing: /var/run/dpdk/spdk3 00:21:50.457 Removing: /var/run/dpdk/spdk4 00:21:50.457 Removing: /var/run/dpdk/spdk_pid58470 00:21:50.457 Removing: /var/run/dpdk/spdk_pid58615 00:21:50.457 Removing: /var/run/dpdk/spdk_pid58813 00:21:50.457 Removing: /var/run/dpdk/spdk_pid58894 00:21:50.457 Removing: /var/run/dpdk/spdk_pid58927 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59031 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59049 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59167 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59363 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59504 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59568 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59644 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59735 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59807 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59845 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59881 00:21:50.457 Removing: /var/run/dpdk/spdk_pid59942 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60025 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60458 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60510 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60561 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60577 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60644 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60660 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60727 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60743 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60789 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60811 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60852 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60870 00:21:50.457 Removing: /var/run/dpdk/spdk_pid60993 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61028 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61103 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61154 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61179 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61243 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61277 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61312 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61344 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61381 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61410 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61450 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61483 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61519 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61552 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61588 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61628 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61657 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61697 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61728 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61768 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61803 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61840 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61878 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61912 00:21:50.457 Removing: /var/run/dpdk/spdk_pid61948 00:21:50.457 Removing: /var/run/dpdk/spdk_pid62018 00:21:50.457 Removing: /var/run/dpdk/spdk_pid62111 00:21:50.457 Removing: 
/var/run/dpdk/spdk_pid62419 00:21:50.458 Removing: /var/run/dpdk/spdk_pid62431 00:21:50.458 Removing: /var/run/dpdk/spdk_pid62466 00:21:50.458 Removing: /var/run/dpdk/spdk_pid62481 00:21:50.458 Removing: /var/run/dpdk/spdk_pid62502 00:21:50.458 Removing: /var/run/dpdk/spdk_pid62521 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62540 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62550 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62580 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62588 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62609 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62628 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62647 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62668 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62687 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62706 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62716 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62742 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62754 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62775 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62811 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62825 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62854 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62918 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62947 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62956 00:21:50.716 Removing: /var/run/dpdk/spdk_pid62990 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63001 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63008 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63056 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63070 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63098 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63108 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63123 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63132 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63146 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63157 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63172 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63181 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63210 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63242 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63257 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63291 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63295 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63311 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63359 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63376 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63407 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63410 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63423 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63436 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63449 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63451 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63464 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63477 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63551 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63605 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63710 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63749 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63794 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63814 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63825 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63845 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63882 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63903 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63973 00:21:50.716 Removing: /var/run/dpdk/spdk_pid63989 00:21:50.716 Removing: /var/run/dpdk/spdk_pid64033 00:21:50.716 Removing: /var/run/dpdk/spdk_pid64099 00:21:50.717 Removing: /var/run/dpdk/spdk_pid64174 
00:21:50.717 Removing: /var/run/dpdk/spdk_pid64203 00:21:50.717 Removing: /var/run/dpdk/spdk_pid64289 00:21:50.717 Removing: /var/run/dpdk/spdk_pid64337 00:21:50.717 Removing: /var/run/dpdk/spdk_pid64375 00:21:50.717 Removing: /var/run/dpdk/spdk_pid64588 00:21:50.717 Removing: /var/run/dpdk/spdk_pid64691 00:21:50.717 Removing: /var/run/dpdk/spdk_pid64714 00:21:50.717 Removing: /var/run/dpdk/spdk_pid65031 00:21:50.717 Removing: /var/run/dpdk/spdk_pid65066 00:21:50.717 Removing: /var/run/dpdk/spdk_pid65356 00:21:50.717 Removing: /var/run/dpdk/spdk_pid65767 00:21:50.717 Removing: /var/run/dpdk/spdk_pid66047 00:21:50.717 Removing: /var/run/dpdk/spdk_pid66845 00:21:50.717 Removing: /var/run/dpdk/spdk_pid67668 00:21:50.717 Removing: /var/run/dpdk/spdk_pid67779 00:21:50.717 Removing: /var/run/dpdk/spdk_pid67852 00:21:50.717 Removing: /var/run/dpdk/spdk_pid69121 00:21:50.717 Removing: /var/run/dpdk/spdk_pid69328 00:21:50.717 Removing: /var/run/dpdk/spdk_pid72705 00:21:50.717 Removing: /var/run/dpdk/spdk_pid73019 00:21:50.717 Removing: /var/run/dpdk/spdk_pid73128 00:21:50.975 Removing: /var/run/dpdk/spdk_pid73262 00:21:50.975 Removing: /var/run/dpdk/spdk_pid73285 00:21:50.975 Removing: /var/run/dpdk/spdk_pid73318 00:21:50.975 Removing: /var/run/dpdk/spdk_pid73340 00:21:50.975 Removing: /var/run/dpdk/spdk_pid73438 00:21:50.975 Removing: /var/run/dpdk/spdk_pid73574 00:21:50.975 Removing: /var/run/dpdk/spdk_pid73729 00:21:50.975 Removing: /var/run/dpdk/spdk_pid73811 00:21:50.975 Removing: /var/run/dpdk/spdk_pid74004 00:21:50.975 Removing: /var/run/dpdk/spdk_pid74093 00:21:50.975 Removing: /var/run/dpdk/spdk_pid74186 00:21:50.975 Removing: /var/run/dpdk/spdk_pid74489 00:21:50.975 Removing: /var/run/dpdk/spdk_pid74871 00:21:50.975 Removing: /var/run/dpdk/spdk_pid74880 00:21:50.975 Removing: /var/run/dpdk/spdk_pid75151 00:21:50.975 Removing: /var/run/dpdk/spdk_pid75171 00:21:50.975 Removing: /var/run/dpdk/spdk_pid75185 00:21:50.975 Removing: /var/run/dpdk/spdk_pid75210 00:21:50.975 Removing: /var/run/dpdk/spdk_pid75221 00:21:50.975 Removing: /var/run/dpdk/spdk_pid75500 00:21:50.975 Removing: /var/run/dpdk/spdk_pid75543 00:21:50.975 Removing: /var/run/dpdk/spdk_pid75828 00:21:50.975 Removing: /var/run/dpdk/spdk_pid76024 00:21:50.975 Removing: /var/run/dpdk/spdk_pid76409 00:21:50.975 Removing: /var/run/dpdk/spdk_pid76912 00:21:50.975 Removing: /var/run/dpdk/spdk_pid77728 00:21:50.975 Removing: /var/run/dpdk/spdk_pid78320 00:21:50.975 Removing: /var/run/dpdk/spdk_pid78329 00:21:50.975 Removing: /var/run/dpdk/spdk_pid80238 00:21:50.975 Removing: /var/run/dpdk/spdk_pid80298 00:21:50.975 Removing: /var/run/dpdk/spdk_pid80357 00:21:50.975 Removing: /var/run/dpdk/spdk_pid80410 00:21:50.975 Removing: /var/run/dpdk/spdk_pid80531 00:21:50.975 Removing: /var/run/dpdk/spdk_pid80587 00:21:50.975 Removing: /var/run/dpdk/spdk_pid80653 00:21:50.975 Removing: /var/run/dpdk/spdk_pid80708 00:21:50.975 Removing: /var/run/dpdk/spdk_pid81031 00:21:50.975 Removing: /var/run/dpdk/spdk_pid82195 00:21:50.975 Removing: /var/run/dpdk/spdk_pid82335 00:21:50.975 Removing: /var/run/dpdk/spdk_pid82580 00:21:50.976 Removing: /var/run/dpdk/spdk_pid83133 00:21:50.976 Removing: /var/run/dpdk/spdk_pid83293 00:21:50.976 Removing: /var/run/dpdk/spdk_pid83450 00:21:50.976 Removing: /var/run/dpdk/spdk_pid83547 00:21:50.976 Removing: /var/run/dpdk/spdk_pid83722 00:21:50.976 Removing: /var/run/dpdk/spdk_pid83831 00:21:50.976 Removing: /var/run/dpdk/spdk_pid84486 00:21:50.976 Removing: /var/run/dpdk/spdk_pid84516 00:21:50.976 Removing: 
/var/run/dpdk/spdk_pid84551 00:21:50.976 Removing: /var/run/dpdk/spdk_pid84803 00:21:50.976 Removing: /var/run/dpdk/spdk_pid84838 00:21:50.976 Removing: /var/run/dpdk/spdk_pid84875 00:21:50.976 Removing: /var/run/dpdk/spdk_pid85292 00:21:50.976 Removing: /var/run/dpdk/spdk_pid85309 00:21:50.976 Removing: /var/run/dpdk/spdk_pid85553 00:21:50.976 Clean 00:21:50.976 18:43:04 -- common/autotest_common.sh@1447 -- # return 0 00:21:50.976 18:43:04 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:21:50.976 18:43:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:50.976 18:43:04 -- common/autotest_common.sh@10 -- # set +x 00:21:51.234 18:43:04 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:21:51.234 18:43:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.234 18:43:04 -- common/autotest_common.sh@10 -- # set +x 00:21:51.235 18:43:04 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:51.235 18:43:04 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:51.235 18:43:04 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:51.235 18:43:04 -- spdk/autotest.sh@391 -- # hash lcov 00:21:51.235 18:43:04 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:51.235 18:43:04 -- spdk/autotest.sh@393 -- # hostname 00:21:51.235 18:43:04 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:51.493 geninfo: WARNING: invalid characters removed from testname! 
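The lcov capture above and the merge and filter steps that follow form a fixed coverage pipeline: gather the counters produced during the run, merge them with the pre-test baseline, then strip third-party and system sources from the combined tracefile before it is archived. A condensed sketch of those steps; the long --rc option list from the trace is folded into $LCOV_OPTS and the five remove patterns into a loop purely for readability, and the paths assume the workspace layout shown above:

    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    repo=/home/vagrant/spdk_repo/spdk
    out=$repo/../output

    # capture counters produced while the tests ran, tagged with the host name
    lcov $LCOV_OPTS -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

    # merge with the baseline taken before the run
    lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # drop DPDK, system headers, and SPDK tools that are not under test
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done

    rm -f "$out/cov_base.info" "$out/cov_test.info"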
00:22:18.031 18:43:28 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:18.968 18:43:32 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:22.283 18:43:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:24.812 18:43:38 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:27.366 18:43:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:30.649 18:43:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:32.550 18:43:46 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:32.809 18:43:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:32.809 18:43:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:32.809 18:43:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.809 18:43:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.809 18:43:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.809 18:43:46 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.809 18:43:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.809 18:43:46 -- paths/export.sh@5 -- $ export PATH 00:22:32.809 18:43:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.809 18:43:46 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:32.809 18:43:46 -- common/autobuild_common.sh@437 -- $ date +%s 00:22:32.809 18:43:46 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715885026.XXXXXX 00:22:32.809 18:43:46 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715885026.b8mG3s 00:22:32.809 18:43:46 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:22:32.809 18:43:46 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:22:32.809 18:43:46 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:32.809 18:43:46 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:32.809 18:43:46 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:32.809 18:43:46 -- common/autobuild_common.sh@453 -- $ get_config_params 00:22:32.809 18:43:46 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:22:32.809 18:43:46 -- common/autotest_common.sh@10 -- $ set +x 00:22:32.809 18:43:46 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:32.809 18:43:46 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:22:32.809 18:43:46 -- pm/common@17 -- $ local monitor 00:22:32.809 18:43:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:32.809 18:43:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:32.809 18:43:46 -- pm/common@25 -- $ sleep 1 00:22:32.809 18:43:46 -- pm/common@21 -- $ date +%s 00:22:32.809 18:43:46 -- pm/common@21 -- $ date +%s 00:22:32.809 18:43:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715885026 00:22:32.809 18:43:46 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715885026 00:22:32.809 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715885026_collect-vmstat.pm.log 00:22:32.809 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715885026_collect-cpu-load.pm.log 00:22:33.745 18:43:47 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:22:33.745 18:43:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:33.745 18:43:47 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:33.745 18:43:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:33.745 18:43:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:33.745 18:43:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:33.745 18:43:47 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:33.745 18:43:47 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:33.745 18:43:47 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:33.745 18:43:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:33.745 18:43:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:33.745 18:43:47 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:33.745 18:43:47 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:33.745 18:43:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:33.746 18:43:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:33.746 18:43:47 -- pm/common@44 -- $ pid=87287 00:22:33.746 18:43:47 -- pm/common@50 -- $ kill -TERM 87287 00:22:33.746 18:43:47 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:33.746 18:43:47 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:33.746 18:43:47 -- pm/common@44 -- $ pid=87289 00:22:33.746 18:43:47 -- pm/common@50 -- $ kill -TERM 87289 00:22:33.746 + [[ -n 5110 ]] 00:22:33.746 + sudo kill 5110 00:22:33.755 [Pipeline] } 00:22:33.776 [Pipeline] // timeout 00:22:33.781 [Pipeline] } 00:22:33.798 [Pipeline] // stage 00:22:33.804 [Pipeline] } 00:22:33.822 [Pipeline] // catchError 00:22:33.832 [Pipeline] stage 00:22:33.835 [Pipeline] { (Stop VM) 00:22:33.850 [Pipeline] sh 00:22:34.129 + vagrant halt 00:22:37.437 ==> default: Halting domain... 00:22:44.006 [Pipeline] sh 00:22:44.284 + vagrant destroy -f 00:22:47.569 ==> default: Removing domain... 
00:22:47.580 [Pipeline] sh 00:22:47.858 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:47.866 [Pipeline] } 00:22:47.883 [Pipeline] // stage 00:22:47.888 [Pipeline] } 00:22:47.904 [Pipeline] // dir 00:22:47.908 [Pipeline] } 00:22:47.924 [Pipeline] // wrap 00:22:47.930 [Pipeline] } 00:22:47.947 [Pipeline] // catchError 00:22:47.956 [Pipeline] stage 00:22:47.958 [Pipeline] { (Epilogue) 00:22:47.974 [Pipeline] sh 00:22:48.257 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:53.574 [Pipeline] catchError 00:22:53.576 [Pipeline] { 00:22:53.590 [Pipeline] sh 00:22:53.868 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:53.868 Artifacts sizes are good 00:22:53.876 [Pipeline] } 00:22:53.892 [Pipeline] // catchError 00:22:53.902 [Pipeline] archiveArtifacts 00:22:53.908 Archiving artifacts 00:22:54.079 [Pipeline] cleanWs 00:22:54.089 [WS-CLEANUP] Deleting project workspace... 00:22:54.089 [WS-CLEANUP] Deferred wipeout is used... 00:22:54.095 [WS-CLEANUP] done 00:22:54.096 [Pipeline] } 00:22:54.112 [Pipeline] // stage 00:22:54.118 [Pipeline] } 00:22:54.132 [Pipeline] // node 00:22:54.137 [Pipeline] End of Pipeline 00:22:54.167 Finished: SUCCESS